

Review Article

Intensive care unit quality improvement: A how-to guide for the interdisciplinary team*
J. Randall Curtis, MD, MPH; Deborah J. Cook, MD; Richard J. Wall, MD, MPH; Derek C. Angus, MD, MPH, FRCP; Julian Bion, FRCP, FRCA, MD; Robert Kacmarek, PhD, RRT; Sandra L. Kane-Gill, PharmD, MSc; Karin T. Kirchhoff, RN, PhD, FAAN; Mitchell Levy, MD; Pamela H. Mitchell, PhD, CNRN; Rui Moreno, MD, PhD; Peter Pronovost, MD, PhD; Kathleen Puntillo, RN, DNSc, FAAN

Objective: Quality improvement is an important activity for all members of the interdisciplinary critical care team. Although an increasing number of resources are available to guide clinicians, quality improvement activities can be overwhelming. Therefore, the Society of Critical Care Medicine charged this Outcomes Task Force with creating a how-to guide that focuses on critical care, summarizes key concepts, and outlines a practical approach to the development, implementation, evaluation, and maintenance of an interdisciplinary quality improvement program in the intensive care unit. Data Sources and Methods: The task force met in person twice and by conference call twice to write this document. We also conducted a literature search on quality improvement and critical care or intensive care and searched online for additional resources. Data Synthesis and Overview: We present an overview of quality improvement in the intensive care unit setting and then describe the following steps for initiating or improving an interdisciplinary critical care quality improvement program: a) identify local motivation, support teamwork, and develop strong leadership; b) prioritize potential projects and choose the first target; c) operationalize the measures, build support for the project, and develop a business plan; d) perform an environmental scan to better understand the problem, potential barriers, opportunities, and resources for the project; e) create a data collection system that accurately measures baseline performance and future improvements; f) create a data reporting system that allows clinicians and others to understand the problem; g) introduce effective strategies to change clinician behavior. In addition, we identify four steps for evaluating and maintaining this program: a) determine whether the target is changing with periodic data collection; b) modify behavior change strategies to improve or sustain improvements; c) focus on interdisciplinary collaboration; and d) develop and sustain support from the hospital leadership. We also identify a number of online resources to complement this overview. Conclusions: This Society of Critical Care Medicine Task Force report provides an overview for clinicians interested in developing or improving a quality improvement program using a step-wise approach. Success depends not only on committed interdisciplinary work that is incremental and continuous but also on strong leadership. Further research is needed to refine the methods and identify the most cost-effective means of improving the quality of health care received by critically ill patients and their families. (Crit Care Med 2006; 34:211–218) KEY WORDS: intensive care; critical care; quality improvement; interdisciplinary; quality of health care

Many publications exist on the issue of quality improvement and outcome assessment (1, 2), and a growing number are specific to critical care (3–11). Although we recognize the value of these prior contributions, the volume of this literature can be overwhelming to critical care clinicians. The objectives of this report are to summarize key concepts and outline a practical approach to develop, implement, evaluate, and sustain a quality improvement program in the intensive care unit (ICU). We also include patient safety as a component of quality (12). In addition, complementary resources are available on the Society of Critical Care Medicine (SCCM) Web site (13). To accomplish these objectives, the authors, as part of the SCCM Outcomes Task Force, met in person twice and by conference call twice to develop and write this document. We conducted a literature search on quality improvement and critical care or intensive care and searched online for additional resources to inform the process. The document was circulated electronically for multiple revisions from all authors.

*See also p. 261.
From the University of Washington, Seattle, Washington (RJC, RJW, PHM); McMaster University, Hamilton, Ontario, Canada (DJC); University of Pittsburgh, Pittsburgh, PA (DCA, SLK-G); University of Birmingham, Birmingham, UK (JB); Harvard Medical School, Boston, MA (RK); University of Wisconsin School of Nursing, Madison, WI (KTK); Rhode Island Hospital, Brown University, Providence, RI (ML); Unidade de Cuidados Intensivos Polivalente, Hospital de St. António dos Capuchos, Centro Hospitalar de Lisboa (Zona Central), Lisbon, Portugal (RM); Johns Hopkins University, Baltimore, MD (PP); and University of California, San Francisco, San Francisco, CA (KP).
Dr. Pronovost consulted for VHA and received grants from AHRQ. The remaining authors have no financial interests to disclose.
Copyright 2005 by the Society of Critical Care Medicine and Lippincott Williams & Wilkins
DOI: 10.1097/01.CCM.0000190617.76104.AC

UNDERSTANDING QUALITY IN HEALTH CARE


Quality of health care has been defined by the Institute of Medicine as care that is safe, timely, effective, efficient, equitable, and patient-centered (14). Those leading quality improvement programs should understand the model developed by Donabedian (15, 16), including three classic quality-of-care components: structure, process, and outcome. Although these components are not necessarily mutually exclusive, the concepts provide a useful framework for understanding and improving the quality of health care.

Structure represents the first component of the quality of care model and can be defined as the way we organize care. Structurally, ICUs are heterogeneous, even within regions or countries. Sources of structural variation include how the ICU is integrated into the hospital or health care system, the size of the ICU, whether the unit is open or closed, the type and amount of technology available, and the number, roles, and responsibilities of ICU staff. Variation in these structural features can affect the quality of care and therefore the potential for recovery from critical illness. For example, studies have suggested that patients managed in a closed ICU by physicians with critical care training have better outcomes than patients managed in open ICUs by generalists without critical care training (17). In addition, technology that is inadequate for an ICU's case-mix can adversely affect outcome (18). Despite these and other studies, our knowledge of how structure affects ICU quality is immature but evolving.

Process represents the second component of the quality of care model. Processes generally refer to what we do, or fail to do, for patients and their families. Delivering high-quality care in the ICU requires the synchronous efforts of a large number of clinical and nonclinical processes. Even when data show improved outcomes with specific interventions, this does not guarantee that these findings are translated effectively into clinical practice (19). In fact, nonclinical processes of the ICU, such as the process of organizational management, can have an important effect on quality (20, 21). Another important process-of-care focus for quality initiatives is the transfer of patients between the ICU and other parts of the hospital or between different clinicians within the ICU (22).

Outcomes represent the third component of the quality of care model and refer to the results we achieve. Critical care clinicians and researchers have traditionally dedicated the most time to measuring and improving patient outcomes. In fact, critical care has led the way in developing risk-adjusted mortality models and standardized mortality ratios. Nonetheless, risk-adjusted measures have important limitations and cannot fully assess the quality of care in an individual institution or ICU (23). Other outcomes also determine ICU quality, including morbid events (e.g., nosocomial infections, venous thromboembolism, or serious adverse drug events) (24), organ dysfunction, health-related quality of life, and patient and family satisfaction with care. For these reasons, it may be more suitable to think of the many "qualities of care" rather than a singular "quality of care" (8).

Critical care clinicians interested in quality improvement should understand the structure-process-outcome model and select aspects they are both interested in and able to improve. Acknowledging that structure is often the most challenging component to change, clinicians may wish to target processes or outcomes instead. Table 1 describes the advantages and disadvantages of using processes vs. outcomes when trying to improve quality of care. Outcome measures are intuitively important targets for clinicians, but they are often less responsive to improvement efforts and more prone to bias than process measures (8, 25, 26). This is partly because adverse outcomes occur less frequently than deficiencies in their associated processes of care. In addition, processes are usually easier to measure and modify (27). Although many factors within health care systems affect outcomes, not all of these factors can be modified by clinicians. Nonetheless, a comprehensive ICU quality improvement program will usually address measures in each of these three categories and may also consider the structures, processes, and outcomes outside the ICU that affect the quality of care for critically ill patients and their families (4).

Table 1. Advantages and disadvantages of process and outcome measures

Do patients care about this?
  Process measure: Less understandable to patients.
  Outcome measure: Yes; very important to patients.

Do providers care about this?
  Process measure: Yes; it relates directly to what providers are doing.
  Outcome measure: Yes; however, providers are wary of confounding and may request risk-adjustment models.

Obtainable from routinely collected data?
  Process measure: Usually.
  Outcome measure: Sometimes; additional data that are not routinely collected may be needed.

Interpretable for feedback and quality improvement?
  Process measure: Provides clear feedback about what providers are actually doing.
  Outcome measure: Difficult for providers to definitively know where to target efforts because outcomes are usually affected by several different processes.

Directly measures prevention?
  Process measure: Yes.
  Outcome measure: No.

Need for risk adjustment?
  Process measure: No; however, eligible patients need to be clearly defined.
  Outcome measure: Yes; different models are needed for each outcome.

Time needed for measurement?
  Process measure: Less.
  Outcome measure: More (for risk adjustment).

Sample size requirements?
  Process measure: Smaller.
  Outcome measure: Larger.
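Table 1's note that outcome measures carry larger sample size requirements than process measures can be illustrated with rough arithmetic. The sketch below applies the standard two-proportion sample-size approximation to hypothetical rates (prophylaxis compliance improving from 60% to 80% vs. a venous thromboembolism rate falling from 6% to 3%); the rates are invented for illustration and are not drawn from the text.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per group to detect a change from
    proportion p1 to p2 (two-sided alpha = 0.05, power = 0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical process measure: prophylaxis compliance 60% -> 80%
n_process = n_per_group(0.60, 0.80)

# Hypothetical outcome measure: VTE incidence 6% -> 3%
n_outcome = n_per_group(0.06, 0.03)

print(n_process, n_outcome)
```

Because adverse outcomes occur far less often than deficiencies in their associated processes, demonstrating a change in an outcome can require roughly an order of magnitude more patients, which is one reason process measures are often more practical early targets.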

MEASURING QUALITY IN INTENSIVE CARE


A number of features define a good quality measure (28–30). The measure must be important, valid, reliable, responsive, interpretable, and feasible. (The appendix contains a description of each of these features.) Although the critical care team members generally need not test the validity, reliability, and responsiveness of every quality measure they choose, they should ascertain that these attributes of the measure have been determined. However, the team must assess the overall importance of the candidate measure because importance may vary between different ICUs. The team should also consider the interpretability and feasibility of a measure before starting a project because these attributes may differ across ICUs based on factors such as the team's experience with quality improvement (for interpretability) or the availability of computerized clinical information systems (for feasibility).

Table 2. Key steps for initiating, improving, evaluating, and sustaining a quality improvement program

Initiating or improving a quality improvement program
1. Do background work: Identify motivation, support teamwork, and develop strong leadership.
2. Prioritize potential projects and choose the projects to begin.
3. Prepare for the project by operationalizing the measures, building support for the project, and developing a business plan.
4. Do an environmental scan to understand the current situation (structure, process, or outcome) and the potential barriers, opportunities, and resources for the project.
5. Create a data collection system to provide accurate baseline data and document improvement.
6. Create a data reporting system that will allow clinicians and other stakeholders to see and understand the problem and the improvement.
7. Introduce strategies to change clinician behavior and create the change that will produce improvement.

Evaluating and sustaining a quality improvement program
1. Determine whether the target is changing with ongoing observation, periodic data collection, and interpretation.
2. Modify behavior change strategies to improve, regain, or sustain improvements.
3. Focus on sustaining interdisciplinary leadership and collaboration for the quality improvement program.
4. Develop and sustain support from the hospital leadership.

DEVELOPING OR IMPROVING AN ICU QUALITY IMPROVEMENT PROGRAM


Initiating a new quality improvement program or improving an existing program requires a series of steps to ensure that the program is successful. Table 2 outlines one approach to these steps, and each step is described next.

Motivation, Teamwork, and Leadership. Quality improvement is an attitude and culture that should resonate through the entire ICU. As such, the foundation for a successful quality improvement program is strong motivation, teamwork, and leadership. Potential motivators for quality improvement programs are numerous. Motivators often derive from the local expertise and interest of individuals on the ICU team. Family feedback or a patient-specific safety issue may also be the stimulus. An institution-wide quality of care initiative may incorporate all departments, thus involving the ICU. Hospital accreditation organizations often strategically target the ICU because of the severity of illness and complexity of care. Recently, professional societies such as the SCCM and regulatory agencies such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) have promoted awareness of the importance of improving quality of care in the ICU. Whatever the initial impetus, successful quality improvement programs often require a change in organizational culture. A clear commitment is needed from all professionals involved.

Quality improvement is not a one-person or one-discipline task; it requires the shared commitment of the entire interdisciplinary ICU team. All voices need to be heard and respected because everyone has something to contribute. Those responsible should be properly trained; fortunately, numerous educational curricula and resources are available to develop the appropriate skills (13). The successful quality improvement program often consists of a number of individual projects under common interdisciplinary leadership. Quality improvement is also a continuous journey rather than a discrete, time-limited project. Even though individual ICU clinicians may champion specific quality improvement projects, change is rarely achieved without strong leadership. Leadership is needed throughout the process, from the initial identification of a target to the evaluative phase. A successful leader needs to dedicate time and commitment for the program to succeed.

Prioritize and Choose a Project. The first step for initiating or improving a quality improvement program is to identify the opportunities and resources that might influence the choice of where to start. The first project should be feasible and likely to be successful so that the team can build on its successes. Initially, the team should avoid overly ambitious projects that consume resources and discourage team members. A number of potential quality measures can form the basis of a specific quality improvement project. In Table 3, we classify potential quality measures according to structure, process, and outcome. Process measures linked to improved outcomes in randomized trials are indicated, as are the process measures that compose the ventilator bundle proposed by the Institute for Healthcare Improvement (IHI), JCAHO, and the Volunteer Hospitals of America. Other organizations have also developed their own lists of potential ICU quality measures, including JCAHO (31), IHI (32), and individual investigators (33). Links to these measures are available at the SCCM Web site (13). Suitable quality measures for the ICU may evolve as new research emerges. This is especially true for process measures. For example, medical emergency teams, specific nurse-to-patient ratios, or evening in-house intensivist coverage may eventually become structural quality measures if future studies support their effectiveness.

Prepare for the Project. Careful preparation is essential. For example, preparing for a project to improve venous thromboprophylaxis might include the following: identify the methods by which thromboprophylaxis is currently being measured, identify key stakeholders and their level of interest in the project, determine whether evidence-based guidelines are already available, and collect preliminary data about current thromboprophylaxis (and perhaps venous thromboembolism) rates. The initiation of a quality improvement project requires a project plan or business plan that includes a task list, budget considerations, and a timeline. The concept of a business plan is often intimidating to clinicians; however, it does not need to be extensive, and it can be helpful regardless of whether the quality improvement team is seeking additional funding for the project. The plan should outline the project for team members and hospital administrators.
For example, a project plan for thromboprophylaxis might review the current burden of venous thromboembolism (the outcome), the use of anticoagulant and mechanical thromboprophylaxis (the process), and institutional costs (personnel and nonpersonnel costs, complications that may increase length of stay, and resource utilization). The plan should also specify whether resources are required and, if so, whether they have been allocated (34). A well-written plan may be useful when requesting additional resources for the project and can provide the basis for projecting expenditures and/or savings to the institution. Such documentation may also help obtain support from the hospital leadership.

Table 3. Possible intensive care unit quality of care measures

Structure measures
  Intensivist-led rounding team

Process measures
  DVT prophylaxis (a, b)
  Stress ulcer prophylaxis (a, b)
  Ventilator-associated pneumonia prevention strategies
    HOB elevation (a, b)
    Heat and moisture exchangers and filters (a)
  Central venous catheter bloodstream infection prevention strategies
    Hand hygiene
    Maximal barriers
    Chlorhexidine (a)
    Avoidance of femoral site (a)
    Avoidance of routine replacement (a)
  Protocol-driven ventilator weaning (a)
  Targeted sedation protocols
    Daily sedation vacation (b)
    Daily assessment of extubation readiness (b)
  Severe sepsis (a)
    Early fluid resuscitation
    Early antibiotics
    Corticosteroids for shock
    Activated protein C for shock
  Low tidal volume ventilation in ALI/ARDS (a)
  Noninvasive ventilation for hypercarbic respiratory failure
  Early enteral feeding (a)
  Appropriate transfusion threshold (a)
  Delayed transfer out of ICU
  Palliative care
    Symptom measurement and management at end of life
    Family conferences
    Directives regarding CPR, basic and advanced life support

Outcome measures
  Unplanned extubation rate
  Ventilator-associated pneumonia rate
  CVC bloodstream infection rate
  Multiply resistant organism infection rate
  Serious adverse drug event rate
  Family satisfaction
  Unscheduled readmissions within 24–48 hrs of ICU discharge
  Mortality (absolute and severity-adjusted)

DVT, deep vein thrombosis; HOB, head of bed; ALI, acute lung injury; ARDS, acute respiratory distress syndrome; CPR, cardiopulmonary resuscitation; CVC, central venous catheter.
(a) Process measures strongly linked to outcomes in randomized trials; (b) part of the ventilator bundle proposed by the Institute for Healthcare Improvement (www.ihi.org). In this table, we outline possible quality of care measures, classified according to structure, process, and outcome variables.

Do an Environmental Scan. Without preliminary information on the current quality of care and the barriers to a quality improvement project, it is difficult to design and launch a successful project. Therefore, performing an environmental scan is an important step (6). An initial scan may involve available clinical or administrative databases. For example, pharmacy databases may be a useful starting place for assessing anticoagulant thromboprophylaxis. Surveys of reported practice patterns can be used to garner impressions of interventions for which compliance is difficult to measure, such as antiembolic stockings (35). More direct methods of establishing baseline data are observational studies such as chart reviews. Finally, qualitative studies can be used to characterize behaviors that bear on quality improvement efforts and identify potential barriers to improvement. The environmental scan may also include a measure of organizational culture. Several tools are now available for assessing an ICU's quality culture (e.g., the SCCM ICU Index) or culture of safety, such as the Patient Safety Climate Survey (36) and the Safety Climate Scale (37). These instruments can highlight important issues that may need to be addressed before, during, or after the project to maximize quality improvement.

Create a Data Collection System. Once the environmental scan is complete, the quality improvement team will have the information needed to design an effective data collection system for baseline assessment. Without accurate baseline data, the team cannot document any improvements. Deciding what will be measured goes beyond generalities such as "effective thromboprophylaxis." The target measure must be carefully defined using discrete, measurable components, and a specific improvement goal should be explicitly stated. The team should consider the following features. First, a unit of analysis, or denominator, of the measure needs to be chosen. Common denominators are defined in relation to a patient sample (e.g., per 100 patients) or standardized for patient exposure (e.g., per patient-day). For example, the latter might be chosen to express the median percent of ICU days with effective thromboprophylaxis. Second, the event or outcome of interest becomes the numerator of the measure and must also be defined; for example, what counts as effective thromboprophylaxis? Because anticoagulant thromboprophylaxis is more effective than mechanical prophylaxis, the primary quality measure might be the proportion of patient-days on which patients received anticoagulant thromboprophylaxis, with mechanical approaches counting in the numerator only when patients are bleeding or at serious risk of bleeding. Another option is to record missed opportunities for thromboprophylaxis (i.e., the proportion of patient-days on which patients received neither anticoagulant nor mechanical approaches), which could be easier to measure and still provide sufficiently useful information. Organizations such as JCAHO and IHI are defining, operationalizing, and evaluating quality measures that can be used by the quality improvement team (31, 32). Third, are the needed data already being collected? If not, how easily can they be obtained?
Will physician order sheets, pharmacy databases, nursing databases, or nurse self-reports be used? Whenever possible, build measurement into the daily workflow and capitalize on existing data sources (38, 39). Regardless of the data source, perform a small-scale pilot before embarking on wide-scale measurement. Quality measures should be developed and implemented with the same rigor as good clinical research, keeping feasibility in mind.

Choosing when to collect data requires a balance between feasibility and precision. Frequent measurements may increase the precision of estimates but require more time and effort. Although reducing the frequency of measurements makes measurement more feasible, it may hide important variation in quality. Who will perform the measurement will vary across ICUs and depends on what is being measured and how. Busy clinicians may find it difficult to engage in this aspect of quality improvement. Explicitly incorporating quality initiatives into the mission of an ICU and embedding responsibility for quality improvement into specific job descriptions will help. A potential predictor of success is the integration of project activities into clinicians' usual workload. However, this alone is insufficient. Provision of educational materials, training in data collection methods, reliability testing of key measures, and ongoing audit of data accuracy are also necessary for whoever is performing the measurement.

Create a Data Reporting System. A successful quality project requires transparent and informative data reporting. Data reporting is important because most critical care clinicians are too busy to analyze and interpret data themselves. In the absence of timely and useful data reporting, interest wanes and projects lose momentum. On the other hand, interpretable and actionable data empower the ICU team, affirm that quality improvement efforts are making a difference, and increase the chances for sustainability. When deciding how data should be reported, consider the specific aims outlined during the planning phase, the background of the target audience, and local familiarity with existing data reports. Before releasing quality improvement results, it is useful to pilot presentation formats and solicit suggestions about design and interpretability from target audiences.
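The numerator and denominator choices described above lend themselves to simple computation. A minimal sketch, in which all records and field names are hypothetical and invented for illustration, computes the per-patient-day compliance measure, the missed-opportunity rate, and a crude text run chart of monthly performance:

```python
# Hypothetical monthly audit data: for each month, ICU patient-days
# observed (denominator), patient-days on anticoagulant prophylaxis
# (primary numerator), and patient-days with neither anticoagulant nor
# mechanical prophylaxis (missed opportunities).
audit = [
    ("Jan", 420, 290, 60),
    ("Feb", 390, 300, 40),
    ("Mar", 450, 380, 25),
]

for month, patient_days, anticoag_days, missed_days in audit:
    compliance = anticoag_days / patient_days   # proportion of patient-days
    missed = missed_days / patient_days         # missed-opportunity rate
    bar = "#" * round(compliance * 20)          # crude text run chart
    print(f"{month}: anticoagulant {compliance:.0%}, "
          f"missed {missed:.0%} |{bar}")
```

Even a throwaway report like this makes the trend visible at a glance, and the same monthly totals feed naturally into a spreadsheet or control chart once the measure definition has stabilized.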
Possible formats include text, tables, and figures; each has advantages and disadvantages. Text is a familiar vehicle for communication but may take more space and be less inviting to read. Tables display both descriptive and numerical variables, are easily assembled, and hold large amounts of information in a small space. However, tables are less useful for showing data over time. Graphs and figures (e.g., control charts, run charts, instrument panels, report cards) can visually display data over time but may require more expertise to create. Regardless of the chosen formats, data should be clearly labeled and simply displayed. The most meaningful formats show not only past but also present performance (40).

Determining when to report data depends partly on how often the target is actually measured. For process measures, monthly or even weekly reports may be more relevant, particularly if clinicians work in 1-wk blocks and feedback is being given about their week. As with data collection, deciding who will analyze and report the data depends on what is being measured and the available resources. The data analyst should be familiar with computational databases and have the relevant statistical expertise and clinical understanding to create valid summaries presented in a format that faithfully represents the results. Avoid mixed messages from different individuals reporting the data. It is also important that feedback be discussed face-to-face with clinicians. This means that a quality improvement leader or champion may need to hold multiple meetings each month to ensure that the majority of ICU clinicians are aware of the progress of the project and have an opportunity to provide feedback.

Introduce Strategies to Change Behavior. The foregoing steps are necessary, but not sufficient, to make a quality improvement project come to life; the next step is to implement behavior change strategies that are likely to produce the desired change (6). The Cochrane Effective Practice and Organization of Care Review Group has published a summary of 41 systematic reviews of hundreds of original studies testing the effects of different behavior change strategies on clinician behavior and patient outcomes (41). Behavior change strategies can be simple or complex and vary in effectiveness (42).
For example, dissemination of mailed educational materials and conferences are least likely to change behavior. Audit and feedback of recent performance are the backbone of successful quality improvement initiatives but are insufficient by themselves (43). Informal discussions and formal presentations by local opinion leaders on the quality improvement team are crucial adjuncts to help change behavior, but reminders and prompts (such as preprinted orders), along with periodic interactive educational interventions, are most useful for inducing and sustaining change. The most powerful behavior change strategies (and often the only strategies that are successful) are multifaceted rather than single approaches, are adapted to the local setting, and address documented barriers in the environment (6, 44). Selecting the behavior change strategies for each project depends not only on the effectiveness of the strategy but also on its feasibility, acceptability, and cost. For example, the proven effectiveness of computerized decision support systems in changing behavior (45) cannot be realized in an ICU without computerized clinical information systems or computerized order entry. It can be helpful to choose behavior change strategies by capitalizing on those that have worked previously; behavior change strategies useful for one project can often be used across several projects in a quality improvement program (46).

Evaluating and Sustaining an ICU Quality Improvement Program. A key step in closing the loop on quality improvement initiatives is taking a scientific approach to evaluating whether the target measure is changing. In other words, the quality improvement program itself should be subjected to a quality improvement process. Without formal evaluation of a quality improvement program, it is impossible to judge whether it is successful and sustainable. After generating initial results, challenges may arise when trying to sustain the improvements. A study of factors associated with clinicians staying involved in quality improvement projects found the following predictors: continuous use of the same quality improvement model, taking courses in the science of quality improvement, and remaining employed in the same unit (47). This study provides important lessons on enhancing the sustainability of a quality improvement program and encourages a focus on consistency of effort, staff training, and staff retention. Other issues that may be important include simple methods for data collection, transparent presentation of results, augmentation of strategies to change behavior, sustaining the energy of the quality improvement team and bedside clinicians, and continued interdisciplinary leadership and collaboration.

Sustain Data Collection. Sustaining a quality improvement program requires ongoing reassessment of the methods being used to collect data. When a project starts, the champions may have to collect data manually. Later, automated data collection methods may become available to obtain data from the electronic medical record or other electronic sources such as billing data. If such automation is possible, maintenance of the project will be greatly facilitated. If not, the team needs to ensure that sufficient resources are allocated to sustain the data collection. In the future, computerized clinical information, clinical decision support, and computerized order entry systems will generate quality reports, thereby automating this aspect of data collection and reporting for selected process and outcome measures and making this step easier for ICU clinicians.

Modify Behavior Change Strategies. Program evaluation may reveal a need to modify the chosen behavior change strategy (46). For example, if clinicians initially receive weekly thromboprophylaxis reports and satisfactory results are obtained, reporting frequency may decrease to monthly or quarterly. On the other hand, if thromboprophylaxis rates never reach target values, additional in-services or ongoing educational sessions may be necessary. If thromboprophylaxis rates decrease after initial improvement, new efforts such as preprinted orders may be necessary.

Sustain Interdisciplinary Leadership and Collaboration in the ICU. A key aspect of sustaining a quality improvement program is ensuring ongoing interdisciplinary leadership. This leadership needs to maintain investment in all aspects of the process, ranging from ensuring the quality of data collection and its effective use to addressing problems. Because quality improvement programs are designed to improve quality, not to place blame on individuals, leadership should foster an environment of disclosure so that staff feel free to report events that affect quality. Staff must believe that they can report any problems or errors without fear of reprisal from leadership.
A major barrier to any quality improvement initiative is the individuals or groups who believe they do not need to improve. They may not believe in the process, may feel threatened by it, or may have constructive ideas for how to improve the process that can be uncovered by engaging them. One strategy is to invite them to participate in the quality improvement process. Another way to convince these individuals is by using local baseline data to establish that a problem exists and, ideally, show that the project is correcting a problem (48).

Networks can help sustain quality improvement programs and work well when ICUs within them have similar structures, processes, and targets for improvement. Drawing on the collective resources of a network can be particularly useful for small ICUs with limited resources to build and sustain a successful quality improvement program. An example is the Vermont-Oxford network for research and quality improvement in neonatal intensive care. This network builds on the IHI collaborative model and consists of hospitals that share data and resources to assist multidisciplinary teams in quality improvement (49). This network also illustrated the value of rapid-cycle improvement methodology to integrate evidence-based practices into neonatal ICUs (50). Dlugacz and colleagues (48) described the creation of a similar network among the ICUs of a multiple-hospital system that established data collection, provided feedback, and created a culture change. These efforts improved quality through defining levels of care in the ICU, decreasing rates of unplanned extubation, and improving end-of-life care.

Sustain Support From Hospital Administration. Although quality improvement initiatives can accomplish much within the ICU, it is helpful if hospital administration supports the program (48). A key task for quality improvement leaders is to portray their program in terms that are meaningful to diverse stakeholders within and outside of the ICU (51). For clinicians, the most meaningful motivation is improving patient care, and tangible benefits will help ensure they stay engaged. For program managers and division chiefs, the key aim may be improving program outcomes. For hospital administrators, it may be improving reputation in the region, based on improving outcomes and increasing market penetration for ICU care. Informal demonstrations of the change in culture can also be powerful.
If executive rounds are a part of the institution's safety culture, these rounds can be linked to the quality improvement initiatives in the ICU (52). If the hospital has a quality improvement committee, relevant personnel from this committee or department can be integrated into each of the rapid-cycle improvement projects. Celebrate the successes of each improvement project with the ICU staff, but also include the program and hospital leadership in these celebrations. Encourage hospital administrators to use these inspiring stories with the hospital board and the public.

CONCLUSIONS
Successful quality improvement programs require interdisciplinary teamwork that is incremental and continuous. Each step is a discrete part of a project, and each project can be considered part of a program. Although quality improvement may seem overwhelming at first, approaching a project in a step-wise manner as outlined here and beginning with a single, concrete project can help to ensure that quality improvement becomes routine and integral to the ICU. Quality improvement efforts require scientifically sound performance measures. Just as in clinical research, sufficient resources must be allocated to ensure a robust data collection, analysis, and reporting system. Leadership is crucial to the success of both the overall program and each project within it. Individual quality improvement projects and the entire quality improvement program should learn from their successes as well as their failures. Further research is needed to refine the methods and identify the most cost-effective means of improving the quality of health care received by critically ill patients and their families.

REFERENCES
1. Donabedian A: The seven pillars of quality. Arch Pathol Lab Med 1990; 114:1115-1118
2. Lomas J: Quality assurance and effectiveness in health care: An overview. Qual Assur Health Care 1990; 2:5-12
3. Frutiger A, Moreno R, Thijs L, et al: A clinician's guide to the use of quality terminology. Working Group on Quality Improvement of the European Society of Intensive Care Medicine. Intensive Care Med 1998; 24:860-863
4. Angus DC, Black N: Improving care of the critically ill: Institutional and health-care system approaches. Lancet 2004; 363:1314-1320
5. Bion JF, Heffner JE: Challenges in the care of the acutely ill. Lancet 2004; 363:970-977
6. Cook DJ, Montori VM, McMullin JP, et al: Improving patient safety locally: Changing clinician behaviour. Lancet 2004; 363:1224-1230
7. Lilford R, Mohammed MA, Spiegelhalter D, et al: Use and misuse of process and outcome data in managing performance of acute medical care: Avoiding institutional stigma. Lancet 2004; 363:1147-1154
8. Pronovost PJ, Nolan T, Zeger S, et al: How can clinicians measure safety and quality in acute care? Lancet 2004; 363:1061-1067
9. Rubenfeld GD, Angus DC, Pinsky MR, et al: Outcomes research in critical care: Results of the American Thoracic Society Critical Care Assembly Workshop on Outcomes Research. The Members of the Outcomes Research Workshop. Am J Respir Crit Care Med 1999; 160:358-367
10. Garland A: Improving the ICU: Part 2. Chest 2005; 127:2165-2179
11. Garland A: Improving the ICU: Part 1. Chest 2005; 127:2151-2164
12. Leape LL, Berwick DM: Five years after To Err Is Human: What have we learned? JAMA 2005; 293:2384-2390
13. ICU Quality Improvement: Professional Resources for Quality Improvement and Quality Corner. Society of Critical Care Medicine, 2005. Available at: http://www.sccm.org. Accessed October 31, 2005
14. Committee on Quality of Health Care in America: Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC, National Academy Press, 2001
15. Donabedian A: Continuity and change in the quest for quality. Clin Perform Qual Health Care 1993; 1:9-16
16. Donabedian A: Aspects of Medical Care Administration: Specifying Requirements for Health Care. Cambridge, MA, Harvard University Press, 1973
17. Pronovost PJ, Angus DC, Dorman T, et al: Physician staffing patterns and clinical outcomes in critically ill patients: A systematic review. JAMA 2002; 288:2151-2162
18. Bastos PG, Knaus WA, Zimmerman JE, et al: The importance of technology for achieving superior outcomes from intensive care. Brazil APACHE III Study Group. Intensive Care Med 1996; 22:664-669
19. Kalassian KG, Dremsizov T, Angus DC: Translating research evidence into clinical practice: New challenges for critical care. Crit Care 2002; 6:11-14
20. Pronovost PJ, Jenckes MW, Dorman T, et al: Organizational characteristics of intensive care units related to outcomes of abdominal aortic surgery. JAMA 1999; 281:1310-1317
21. Reis Miranda D, Ryan DW, Schaufeli WB, et al (Eds): Organization and Management of Intensive Care: A Prospective Study in 12 European Countries. Berlin and Heidelberg, Springer-Verlag, 1997
22. Carlet J: Quality assessment of intensive care units. Curr Opin Crit Care 1996; 2:319-325
23. Werner RM, Asch DA: The unintended consequences of publicly reporting quality information. JAMA 2005; 293:1239-1244
24. JCAHO Core Measures. Available at: http://www.jcaho.org/pms/coremeasures/index.htm. Accessed October 31, 2005
25. Rubin HR, Pronovost P, Diette GB: From a process of care to a measure: The development and testing of a quality indicator. Int J Qual Health Care 2001; 13:489-496
26. Rubin HR, Pronovost P, Diette GB: The advantages and disadvantages of process-based measures of health care quality. Int J Qual Health Care 2001; 13:469-474
27. Brook RH, McGlynn EA, Cleary PD: Quality of health care. Part 2: Measuring quality of care. N Engl J Med 1996; 335:966-970
28. McGlynn EA: Selecting common measures of quality and system performance. Med Care 2003; 41:I39-I47
29. McGlynn EA: Introduction and overview of the conceptual framework for a national quality measurement and reporting system. Med Care 2003; 41:I1-I7
30. Flowers J, Hall P, Pencheon D: Public health indicators. Public Health 2005; 119:239-245
31. Candidate Core Measure Sets. Available at: http://www.jcaho.org/pms/coremeasures/candidate core measure sets.htm. Accessed October 31, 2005
32. Critical Care. Available at: http://www.ihi.org/IHI/Topics/CriticalCare/. Accessed May 18, 2005
33. Pronovost PJ, Berenholtz SM, Ngo K, et al: Developing and pilot testing quality indicators in the intensive care unit. J Crit Care 2003; 18:145-155
34. Docimo AB, Pronovost PJ, Davis RO, et al: Using the online and offline change model to improve efficiency for fast-track patients in an emergency department. Jt Comm J Qual Improv 2000; 26:503-514
35. Limpus A, Chaboyer W: The use of graduated compression stockings in Australian intensive care units: A national audit. Aust Crit Care 2003; 16:53-58
36. Safety Climate Survey. Available at: http://www.ihi.org. Accessed November 31, 2005
37. Pronovost PJ, Weast B, Holzmueller CG, et al: Evaluation of the culture of safety: Survey of clinicians and managers in an academic medical center. Qual Saf Health Care 2003; 12:405-410
38. Nelson EC, Splaine ME, Batalden PB, et al: Building measurement and data collection into medical practice. Ann Intern Med 1998; 128:460-466
39. Nelson EC, Splaine ME, Plume SK, et al: Good measurement for good improvement work. Qual Manag Health Care 2004; 13:1-16
40. Dodek PM, Heyland DK, Rocker GM, et al: Translating family satisfaction data into quality improvement. Crit Care Med 2004; 32:1922-1927
41. Grimshaw JM, Shirran L, Thomas R, et al: Changing provider behavior: An overview of systematic reviews of interventions. Med Care 2001; 39:II2-II45
42. Cabana MD, Rand CS, Powe NR, et al: Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282:1458-1465
43. Thomson O'Brien MA, Oxman AD, Davis DA, et al: Audit and feedback: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2000; CD000259
44. Grol R: Improving the quality of medical care: Building bridges among professional pride, payer profit, and patient satisfaction. JAMA 2001; 286:2578-2585
45. Garg AX, Adhikari NK, McDonald H, et al: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: A systematic review. JAMA 2005; 293:1223-1238
46. McMullin J, Landry F, McDonald E, et al: Changing behavior in the ICU by optimizing thromboprophylaxis. Abstr. Am J Respir Crit Care Med 2003; 167:A250
47. Wallin L, Bostrom AM, Harvey G, et al: Progress of unit based quality improvement: An evaluation of a support strategy. Qual Saf Health Care 2002; 11:308-314
48. Dlugacz YD, Stier L, Lustbader D, et al: Expanding a performance improvement initiative in critical care from hospital to system. Jt Comm J Qual Improv 2002; 28:419-434
49. Plsek PE: Quality improvement methods in clinical medicine. Pediatrics 1999; 103:203-214
50. Jackson JK, Vellucci J, Johnson P, et al: Evidence-based approach to change in clinical practice: Introduction of expanded nasal continuous positive airway pressure use in an intensive care nursery. Pediatrics 2003; 111:e542-e547
51. Ferlie EB, Shortell SM: Improving the quality of health care in the United Kingdom and the United States: A framework for change. Milbank Q 2001; 79:281-315
52. Pronovost PJ, Weast B, Bishop K, et al: Senior executive adopt-a-work unit: A model for safety improvement. Jt Comm J Qual Saf 2004; 30:59-68

Crit Care Med 2006 Vol. 34, No. 1

APPENDIX: FEATURES DEFINING A GOOD ICU QUALITY MEASURE


A good ICU quality measure should be important, valid, reliable, responsive, interpretable, and feasible. Each of these characteristics is briefly described next, with a focus on its relevance for the ICU quality improvement team.


An important measure for a quality improvement program should generally target high-prevalence outcomes or outcomes associated with considerable morbidity and mortality. For a structure or process measure to be important, it must be strongly linked to clinically important outcomes. In addition, various parties may consider the measure more or less important, depending on their perspective. Measures that are important for the individual patient and family may differ from the measures important to the ICU manager, hospital executive, or community. Each perspective should be viewed as complementary instead of competitive, and the quality improvement team should take each into consideration.

A valid measure refers to the extent to which a measure reflects what it is supposed to measure. Validation may include comparing the measure to other measures such as a gold standard (criterion validity) or to other measures or constructs that should give similar results (construct validity). Generally, the ICU quality improvement team will use measures that have already been shown to be valid.

A reliable measure refers to the extent to which a measure yields the same result when assessed by a different rater (interrater reliability) or the extent to which repeated measurement provides the same result when the factor being measured hasn't changed (intrarater reliability). Generally, the ICU quality improvement team will use measures that have already been shown to be reliable.

A responsive measure refers to the extent to which the measure is sensitive to change introduced by the quality improvement process. An important component of a responsive measure is that there is room for improvement in the measure and that the measure is capable of detecting that improvement. There should be a gap between current performance and desired performance that the measure can identify. The ICU quality improvement team should determine that others have found the measure to be responsive and also that there is room for improvement within their individual ICU.

An interpretable measure is easily understood by the target audience, including critical care clinicians, ICU management, and hospital leadership.

A feasible measure is relatively easy to obtain and can be collected with available resources. Feasibility will vary depending on the resources that are available and should be assessed for every measure before implementing a quality improvement project.
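When a team does need to check interrater reliability locally, for example when two data abstractors independently code the same set of charts, Cohen's kappa is a standard summary of agreement beyond chance. The sketch below is an illustration only, not part of the article's methods, and the binary ratings (1 = criterion met, 0 = not met) are invented.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need equal-length, nonempty lists"
    n = len(rater_a)
    # Proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if each rater assigned categories independently at random
    categories = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Invented ratings for ten charts from two hypothetical abstractors
abstractor_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
abstractor_2 = [1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(abstractor_1, abstractor_2), 2))  # → 0.52
```

A kappa near 1 indicates near-perfect agreement; values in the 0.4-0.6 range would prompt the team to clarify the abstraction instructions before trusting the measure.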


