
QUALITY IMPROVEMENT RESEARCH

Systematic reviews of the effectiveness of quality improvement strategies and programmes

J Grimshaw, L M McAuley, L A Bero, R Grilli, A D Oxman, C Ramsay, L Vale, M Zwarenstein

Qual Saf Health Care 2003;12:298–303

Systematic reviews provide the best evidence on the effectiveness of healthcare interventions including quality improvement strategies. The methods of systematic review of individual patient randomised trials of healthcare interventions are well developed. We discuss methodological and practice issues that need to be considered when undertaking systematic reviews of quality improvement strategies including developing a review protocol, identifying and screening evidence sources, quality assessment and data abstraction, analytical methods, reporting systematic reviews, and appraising systematic reviews. This paper builds on our experiences within the Cochrane Effective Practice and Organisation of Care (EPOC) review group.

See end of article for authors' affiliations. Correspondence to: Dr J Grimshaw, Director of the Clinical Epidemiology Programme, Ottawa Health Research Institute, 1053 Carling Avenue, Ottawa ON K1Y 4E9, Canada; jgrimshaw@ohri.ca

Systematic reviews are "reviews of a clearly formulated question that use explicit methods to identify, select, and critically appraise relevant research and to collect and analyse data from the studies that are included in the review".1 Well conducted systematic reviews are increasingly seen as providing the best evidence to guide choice of quality improvement strategies in health care.2–4 Furthermore, systematic reviews should be an integral part of the planning of future quality improvement research to ensure that the proposed research is informed by all relevant current research and that the research questions have not already been answered.

Systematic reviews are a generic methodology that can be used to synthesise evidence from a broad range of methods addressing different types of questions (box 1). Mulrow6 suggested that, in comparison with traditional narrative reviews, systematic reviews are an efficient scientific approach to identify and summarise evidence on the effectiveness of interventions that allow the generalisability and consistency of research findings to be assessed and data inconsistencies to be explored. Furthermore, the explicit methods used in systematic reviews should limit bias and improve the reliability and accuracy of conclusions. In this paper we focus on the methods of systematic reviews of the effectiveness of quality improvement strategies and programmes, building on our experiences within the Cochrane Effective Practice and Organisation of Care (EPOC) review group (box 2).7–9 (For a more general discussion about the conduct of systematic reviews see the Cochrane Handbook,10 Egger and colleagues11 and Cooper and Hedges.12)

Box 1 Steps involved in undertaking a systematic review
• Stating the objectives of the research.
• Defining eligibility criteria for studies to be included.
• Identifying (all) potentially eligible studies.
• Applying eligibility criteria.
• Assembling the most complete data set feasible.
• Analysing this data set, using statistical synthesis and sensitivity analyses, if appropriate and possible.
• Preparing a structured report of the research.
From Chalmers.5

Box 2 The Cochrane Effective Practice and Organisation of Care (EPOC) Group
The Cochrane Effective Practice and Organisation of Care (EPOC) group undertakes systematic reviews of the effectiveness of professional, organisational, financial, and regulatory interventions to improve professional practice and the delivery of effective health services.6–8 It was established in 1994 and since then has worked with over 180 reviewers worldwide to produce 29 reviews and 22 protocols covering a diverse range of topics including the effectiveness of different continuing medical education strategies, changes in the setting of care and different remuneration systems for primary care physicians.

FORMING A REVIEW TEAM
When preparing to undertake a systematic review of a quality improvement strategy it is important to assemble a review team with the necessary combination of content and technical expertise. Content expertise may come from consumers, healthcare professionals, and policy makers. Content expertise is necessary to ensure that the review question is sensible and addresses the concerns of key stakeholders and to aid interpretation of the review. Frequently, content experts may not have adequate technical expertise and require additional support during the conduct of reviews. Technical expertise is required to develop search strategies for major databases, hand search
key journals (when appropriate), screen search results, develop data abstraction forms, appraise quality of primary studies, and statistically pool data (when appropriate).

DEVELOPING A PROTOCOL FOR A SYSTEMATIC REVIEW
Before undertaking a systematic review it is important to develop a formal protocol detailing the background, objectives, inclusion criteria, search methods, and proposed analytical methods to be used in the review. If reviewers do not develop a protocol a priori, there is a danger that the results of the review may be influenced by the data. For example, reviewers may exclude studies with unexpected or undesirable results.13 Developing and following a detailed protocol protects against this potential bias. Examples of protocols for reviews of quality improvement strategies are available in The Cochrane Library and from the EPOC website.8 9

INCLUSION CRITERIA FOR SYSTEMATIC REVIEWS
Reviewers need to develop the review question based upon consideration of the types of study (for example, randomised controlled trials), interventions (for example, audit and feedback), study populations (for example, physicians), and outcomes (for example, objective measures of provider behaviour) in which they are interested. In general it is better to choose an estimation approach rather than a hypothesis testing approach in systematic reviews of quality improvement strategies, as decision makers want to know something about the size of the expected effects (and the uncertainty around those estimates), and not just whether the null hypothesis can be rejected or not. Moreover, focusing on hypothesis testing tends to focus attention on p values rather than effects.

It is often helpful for reviewers to attempt to frame their research question in terms of the effects of quality improvement strategy x on end point y in study population z. In addition, reviewers should attempt to define a priori any subgroup analyses they wish to undertake to explore effect modifiers (for example, characteristics of the intervention) or other sources of heterogeneity (for example, quality of the included studies).

Design considerations
While cluster randomised trials are the most robust design for quality improvement strategies,13 some strategies may not be amenable to randomisation—for example, mass media campaigns. Under these circumstances, reviewers may choose to include other designs including quasi experimental designs.14 If a review includes quasi experimental studies—for example, interrupted time series designs for evaluating mass media campaigns15—the reviewers need to recognise the weaknesses of such designs and be cautious of overinterpreting the results of such studies. Within EPOC, reviewers can include randomised trials, controlled before and after studies, and interrupted time series.8

Intervention considerations
Another important issue faced by reviewers is the lack of a generally accepted classification of quality improvement strategies; as a result, it is vital that reviewers clearly define the intervention of interest. In our experience it is easier to define interventions based on pragmatic descriptions of the components of an intervention—for example, interactive educational sessions—than theoretical constructs—for example, problem based learning—as the description of interventions in primary studies is commonly poorly reported, especially lacking details of the rationale or theoretical basis for an intervention. Developing the definition of an intervention that can be operationalised within a systematic review frequently requires several iterations, preferably with involvement of content experts outside the review team to ensure that the resulting definitions are likely to be robust and meaningful. EPOC has developed a taxonomy for quality interventions based on such descriptions that may provide a useful starting point for such discussions (see box 3 for examples).

Box 3 Examples from the EPOC taxonomy of professional quality improvement strategies16
• Distribution of educational materials: published or printed recommendations for clinical care including clinical practice guidelines, delivered personally or through mass mailings.
• Educational meetings: healthcare providers who have participated in conferences, lectures, workshops or traineeships.
• Local consensus processes: inclusion of participating providers in discussion to ensure that they agreed that the chosen clinical problem was important and the approach to managing the problem was appropriate.
• Educational outreach visits and academic detailing: use of a trained person who met with providers in their practice settings to give information with the intent of changing the provider's practice. The information given may have included feedback on the performance of the provider(s).
• Local opinion leaders: use of providers nominated by their colleagues as "educationally influential". The investigators must have explicitly stated that their colleagues identified the opinion leaders.
• Patient mediated interventions: new clinical information (not previously available) collected directly from patients and given to the provider, e.g. depression scores from an instrument.
• Audit and feedback: any summary of clinical performance of health care over a specified period of time. The summary may also have included recommendations for clinical action. The information may have been obtained from medical records, computerised databases, or observations from patients.
• Reminders: patient or encounter specific information, provided verbally, on paper or on a computer screen, which is designed or intended to prompt a health professional to recall information, including computer aided decision support and drug dosages.
• Marketing: a survey of targeted providers to identify barriers to change and subsequent design of an intervention that addresses identified barriers.

The lumping versus splitting debate
A key issue faced by reviewers of quality improvement strategies is deciding how broad the scope of a review should be; this is commonly known as the "lumping" or "splitting" debate.17 For example, a review team could choose to undertake a review of quality improvement interventions to improve chronic disease management across all healthcare settings and professionals, or a review of quality improvement interventions to improve chronic disease management within primary care, or a review of quality improvement strategies to improve diabetes care within primary care, or a review of audit and feedback to improve all aspects of care across all healthcare settings. The rationale for taking a broad approach ("lumping") is that, because systematic reviews aim to identify the common generalisable features within similar interventions, minor differences in study characteristics may not be crucially important. The rationale for taking a narrower approach ("splitting") is that it is only appropriate to include studies which are very similar in design, study population, intervention characteristics, and outcome recording.

There are good methodological reasons for taking a broad approach. Broad systematic reviews allow the generalisability and consistency of research findings to be assessed across a wider range of different settings, study populations, and behaviours. This reduces the risk of bias or chance results.
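The broad ("lumping") approach with explicit a priori subgroup analyses can be sketched in code. All of the settings, behaviours, and effect sizes below are invented for illustration, not taken from any real review; the point is simply that pooling all comparisons into one data set lets the median effect be summarised within each pre-specified subgroup.

```python
from statistics import median

# Hypothetical comparisons from a broad ("lumped") review: each record is
# (setting, targeted behaviour, absolute improvement in % compliance).
# All values are invented for illustration.
comparisons = [
    ("primary care", "chronic disease", 8.1),
    ("primary care", "chronic disease", 3.6),
    ("primary care", "prevention", 17.0),
    ("hospital", "chronic disease", 5.0),
    ("hospital", "test ordering", -1.2),
    ("hospital", "prevention", 7.5),
]

def median_effect_by(records, key_index):
    """A priori subgroup analysis: median effect within each subgroup."""
    groups = {}
    for record in records:
        groups.setdefault(record[key_index], []).append(record[2])
    return {group: median(effects) for group, effects in groups.items()}

# Subgroup by setting (index 0), then by targeted behaviour (index 1).
print(median_effect_by(comparisons, 0))
print(median_effect_by(comparisons, 1))
```

Because all comparisons stay in one data set, the consistency of effects can be inspected across settings and behaviours rather than within a single narrow slice.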
For example, Jamtvedt and colleagues undertook a review of audit and feedback to improve all aspects of care across all healthcare settings.18 They identified 85 studies of which 18 considered the effects of audit and feedback on chronic disease management, 14 considered the effects of audit and feedback on chronic disease management in primary care, and three considered the effects of audit and feedback on diabetes care in primary care settings. By undertaking a broad review they were able to explore whether the effects of audit and feedback were similar across different types of behaviour, different settings, and different types of behaviour within different settings. If they had undertaken a narrow review of audit and feedback on diabetes care in primary care they would have been limited to considering only three studies and may have made erroneous conclusions if these studies suffered from bias or chance results. Very narrowly focused reviews are, in effect, subgroup analyses and suffer all the well recognised potential hazards of such analyses.19 A more transparent approach is to lump together all similar interventions and then to carry out explicit a priori subgroup analyses.

IDENTIFYING AND SCREENING EVIDENCE SOURCES
Reviewers need to identify what bibliographic databases and other sources the review team will search to identify potentially relevant studies and the proposed search strategies for the different databases. There is a wide range of bibliographic databases available—for example, Medline, EMBASE, Cinahl, Psychlit, ERIC, SIGLE. The review team has to make a judgement about which databases are most relevant to the review question and can be searched within the resources available to them.

The review team has to develop sensitive search strategies for potentially relevant studies. Unfortunately, quality improvement strategies are poorly indexed within bibliographic databases; as a result, broad search strategies using free text and allied MeSH headings often need to be used. Furthermore, while optimal search strategies have been developed for identifying randomised controlled trials,20 efficient search strategies have not been developed for quasi experimental designs. Review teams should include or consult with experienced information scientists to provide technical expertise in this area.

EPOC has developed a highly sensitive search strategy (available at http://www.epoc.uottawa.ca/register.htm) for studies within its scope, and has searched Medline, EMBASE, Cinahl and SIGLE retrospectively and prospectively.21 We have screened over 200 000 titles and abstracts retrieved by our searches of these databases to identify potentially relevant studies. These are entered onto a database ("pending") awaiting further assessment of the full text of the paper. Studies which, after this assessment, we believe to be within our scope are then entered onto our database (the "specialised register") with hard copies kept in our editorial base. We currently have approximately 2500 studies in our specialised register (with a further 3000 potentially relevant studies currently being assessed). In future, reviewers may wish to consider the EPOC specialised register as their main bibliographic source for reviews and only undertake additional searches if their review extends beyond EPOC's scope (see the EPOC website for further information about the register).9

Preferably two reviewers should independently screen the results of searches and assess potentially relevant studies against the inclusion criteria in the protocol. The reasons for excluding potentially relevant studies should be noted when the review is reported.

QUALITY ASSESSMENT AND DATA ABSTRACTION
Studies meeting the inclusion criteria should be assessed against quality criteria. While there is growing empirical evidence about sources of bias in individual patient randomised trials of healthcare interventions,22 quality criteria for cluster randomised trials and quasi experimental studies are less developed. EPOC has developed quality appraisal criteria for such studies based upon the threats to their validity identified by Cook and Campbell23 (available from the EPOC website).9 Reviewers should develop a data abstraction checklist to ensure a common approach is applied across all studies. Box 4 provides examples of data abstraction checklist items that reviewers may wish to collect. Data abstraction should preferably be undertaken independently by two reviewers. The review team should identify the methods that will be used to resolve disagreements.

Box 4 Examples of data abstraction checklist items
• Inclusion criteria
• Type of targeted behaviour
• Participants
• Characteristics of participating providers
• Characteristics of participating patients
• Study setting
• Location of care
• Country
• Study methods
• Unit of allocation/analysis
• Quality criteria
• Prospective identification by investigators of barriers to change
• Type and characteristics of interventions
• Nature of desired change
• Format/sources/recipient/method of delivery/timing
• Type of control intervention (if any)
• Outcomes
• Description of the main outcome measure(s)
• Results
Derived from the EPOC data abstraction checklist.8 16

PREPARING FOR DATA ANALYSIS
The methodological quality of primary studies of quality improvement strategies is often poor. Reviewers frequently need to make decisions about which outcomes to include within data analyses and may need to undertake re-analysis of some studies. In this section we highlight methods for addressing two common problems encountered in systematic reviews of quality improvement strategies—namely, reporting of multiple end points and handling unit of analysis errors in cluster randomised studies.

Reporting multiple outcomes
Commonly, quality improvement studies report multiple end points—for example, changes in practice for 10 different preventive services or diagnostic tests. While reviewers may choose to report all end points, this is problematic both for the analysis and for readers who may be overwhelmed with data. The review team should decide which end points it will report and include in the analysis. For example, a review team could choose to use the main end points specified by the investigators when this is done, and the median end point when the main end points are not specified.21

Handling unit of analysis errors in primary studies
Many cluster randomised trials have potential unit of analysis errors; practitioners or healthcare organisations are randomised but during the statistical analyses the individual patient data are analysed as if there was no clustering within practitioner or healthcare organisation.14 24
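The cost of ignoring clustering can be quantified with the standard design effect for cluster designs, 1 + (m − 1)ρ, where m is the mean cluster size and ρ the intracluster correlation coefficient (ICC). A minimal sketch, with the ICC and cluster sizes invented purely for illustration:

```python
# Why analysing clustered data as if patients were individually randomised
# understates the true variance: the naive analysis is off by the design
# effect. The ICC and cluster sizes below are invented for illustration.

def design_effect(mean_cluster_size, icc):
    """Standard variance inflation factor for cluster randomised designs."""
    return 1 + (mean_cluster_size - 1) * icc

def effective_sample_size(n_patients, mean_cluster_size, icc):
    """Number of independent patients the clustered sample is 'worth'."""
    return n_patients / design_effect(mean_cluster_size, icc)

# 20 practices each contributing 50 patients, with a modest ICC of 0.05:
deff = design_effect(50, 0.05)
n_eff = effective_sample_size(20 * 50, 50, 0.05)
print(f"design effect {deff:.2f}, effective sample size {n_eff:.0f} of 1000")
```

An analysis that treats the 1000 patients as independent therefore behaves as if it had roughly three times the information it really has, which is why the resulting p values and confidence intervals come out too small.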
In a recent systematic review of guideline dissemination and implementation strategies over 50% of included cluster randomised trials had such unit of analysis errors.21 Potential unit of analysis errors result in artificially low p values and overly narrow confidence intervals.25 It is possible to re-analyse the results of cluster randomised trials using a t test if a study reports event rates for each of the clusters in the intervention and control groups, or if a study reports data on the extent of statistical clustering.25 26 In our experience it is rare for studies with unit of analysis errors to report sufficient data to allow re-analysis. The point estimate is not affected by unit of analysis errors, so it is possible to consider the size of the effects reported in these studies even though the statistical significance of the results cannot be ascertained (see Donner and Klar27 for further discussion of systematic reviews of clustered data, and Grimshaw and colleagues20 and Ramsay and colleagues28 for further discussion of other common methodological problems in primary studies of quality improvement strategies).

METHODS OF ANALYSIS/SYNTHESIS
Meta-analysis
When undertaking systematic reviews it is often possible to undertake meta-analyses that use "statistical techniques within a systematic review to integrate the results of individual studies".1 Meta-analyses combine data from multiple studies and summarise all the reviewed evidence by a single statistic, typically a pooled relative risk of an adverse outcome with confidence intervals. Meta-analysis assumes that different studies addressing the same issue will tend to have findings in the same direction.29 In other words, the real effect of an intervention may vary in magnitude but will be in the same direction. Systematic reviews of quality improvement strategies typically include studies that exhibit greater variability or heterogeneity of estimates of effectiveness of such interventions due to differences in how interventions were operationalised, targeted behaviours, targeted professionals, and study contexts. As a result, the real effect of an intervention may vary both in magnitude and direction, depending on the modifying effect of such factors. Under these circumstances, meta-analysis may result in an artificial result which is potentially misleading and of limited value to decision makers. Furthermore, reports of primary studies frequently have common methodological problems—for example, unit of analysis errors—or do not report data necessary for meta-analysis. Given these considerations, many existing reviews of quality improvement strategies have used qualitative synthesis methods rather than meta-analysis.

Although deriving an average effect across a heterogeneous group of studies is unlikely to be helpful, quantitative analyses can be useful for describing the range and distribution of effects across studies and for exploring probable explanations for the variation that is found. Generally, a combination of quantitative analysis, including visual analyses, and qualitative analysis should be used.

Qualitative synthesis methods
Previous qualitative systematic reviews of quality improvement strategies have largely used vote counting methods that add up the number of positive and negative comparisons and conclude whether the interventions were effective on this basis.2 30 Vote counting can count either the number of comparisons with a positive direction of effect (irrespective of statistical significance) or the number of comparisons with statistically significant effects. These approaches suffer from a number of weaknesses. Vote counting comparisons with a positive direction fails to provide an estimate of the effect size of an intervention (giving equal weight to comparisons that show a 1% change or a 50% change) and ignores the precision of the estimate from the primary comparisons (giving equal weight to comparisons with 100 or 1000 participants). Vote counting comparisons with statistically significant effects suffers similar problems; in addition, comparisons with potential unit of analysis errors need to be excluded because of the uncertainty about their statistical significance, and underpowered comparisons observing clinically significant but statistically insignificant effects would be counted as "no effect comparisons".

To overcome some of these problems, we have been exploring more explicit analytical approaches reporting:
• the number of comparisons showing a positive direction of effect;
• the median effect size across all comparisons;
• the median effect size across comparisons without unit of analysis errors; and
• the number of comparisons showing statistically significant effects.21

This allows the reader to assess the likely effect size and consistency of effects across all included studies and whether these effects differ between studies with and without unit of analysis errors. By using these more explicit methods we are able to include information from all studies, but do not have the same statistical certainty of the effects as we would using a vote counting approach. An example of the impact of this approach is shown in box 5.

Box 5 Impact of using an explicit analytical approach
Freemantle et al31 used a vote counting approach in a review of the effects of disseminating printed educational materials. None of the studies using appropriate statistical analyses found statistically significant improvements in practice. The authors concluded: "This approach has led researchers and quality improvement professionals to discount printed educational materials as possible interventions to improve care".
In contrast, Grimshaw et al21 used an explicit analytical approach in a review of the effects of guideline dissemination and implementation strategies. Across four cluster randomised controlled trials they observed a median absolute improvement of +8.1% (range +3.6% to +17%) in compliance with guidelines. Two studies had potential unit of analysis errors; the remaining two studies observed no statistically significant effects. They concluded: "These results suggest that educational materials may have a modest effect on guideline implementation . . . However the evidence base is sparse and of poor quality". This approach, by capturing more information, led to the recognition that printed educational materials may result in modest but important improvements in care and require further evaluation.

Exploring heterogeneity
When faced with heterogeneity in both quantitative and qualitative systematic reviews, it is important to explore the potential causes of this in a narrative and statistical manner (where appropriate).32 Ideally, the review team should have identified potential effect modifiers a priori within the review protocol. It is possible to explore heterogeneity using tables, bubble plots, and whisker plots (displaying medians, interquartile ranges, and ranges) to compare the size of the observed effects in relationship to each of these modifying variables.18 Meta-regression is a multivariate statistical technique that can be used to examine how the observed effect sizes are related to potential explanatory variables. However, the small number of included studies common in systematic reviews of quality improvement strategies may lead to overfitting and spurious claims of association. Furthermore, it is important to recognise that these associations are observational and may be confounded by other factors.33
As a result, such analyses should be seen as exploratory. Graphical presentation of such analyses often facilitates understanding as it allows several levels of information to be conveyed concurrently (fig 1).

Figure 1 Graphical presentation of results using a bubble plot. This bubble plot, from a review on the effects of audit and feedback,18 shows the relationship between the adjusted risk difference and baseline non-compliance. The adjusted risk difference represents the difference in non-compliance before the intervention from the difference observed after the intervention. Each bubble represents a study, and the size of the bubble reflects the number of healthcare providers in the study. The regression line shows a trend towards increased compliance with audit and feedback with increasing baseline non-compliance.

Key messages
• Systematic reviews provide the best evidence on the effectiveness of healthcare interventions including quality improvement strategies.
• Systematic reviews allow the generalisability and consistency of research findings to be assessed and data inconsistencies to be explored across studies.
• The conduct of systematic reviews requires content and technical expertise.
• The Cochrane Effective Practice and Organisation of Care (EPOC) group has developed methods and tools to support reviews of quality improvement strategies.

APPRAISING SYSTEMATIC REVIEWS OF QUALITY IMPROVEMENT STRATEGIES
Systematic reviews of quality improvement strategies are of varying quality and potential users of such reviews should appraise their quality carefully. Fortunately, Oxman and colleagues have developed and validated a checklist for appraising systematic reviews including nine criteria scored as "done", "partially done", and "not done", and one summary criterion scored on a 1–7 scale where 1 indicates "major risk of bias" and 7 indicates "minor risk of bias" (box 6).35 36 Grimshaw and colleagues used this scale to appraise 41 systematic reviews of quality improvement strategies published by 1998; the median summary quality score was 4, indicating that they had some methodological flaws.3 Common methodological problems within these reviews included failure to adequately report inclusion criteria, to avoid bias in the selection of studies, to report criteria to assess the validity of included studies, and to apply criteria to assess the validity of selected studies. Unit of analysis errors were rarely addressed in these reviews.

Box 6 Checklist for appraising systematic reviews
• Were the search methods used to find evidence (primary studies) on the primary question(s) stated?
• Was the search for evidence reasonably comprehensive?
• Were the criteria used for deciding which studies to include in the review reported?
• Was bias in the selection of articles avoided?
• Were the criteria used for assessing the validity of the studies that were reviewed reported?
• Was the validity of all the studies referred to in the text assessed using appropriate criteria (either in selecting studies for inclusion or in analysing the studies that are cited)?
• Were the methods used to combine the findings of the relevant studies (to reach a conclusion) reported?
• Were the findings of the relevant studies combined appropriately relative to the primary question addressed by the review?
• Were the conclusions made by the author(s) supported by the data and/or the analysis reported in the review?
• Overall, how would you rate the scientific quality of this review?
Items 1–9 scored as done, not clear, not done. Item 10 scored on a scale of 1 (major risk of bias) to 7 (minimal risk of bias).
Adapted from Oxman and Oxman and Guyatt.33 34

CONCLUSION
Systematic reviews are increasingly recognised as the best evidence source on the effectiveness of different quality improvement strategies. In this paper we have discussed issues that reviewers face when conducting reviews of quality improvement strategies based on our experiences within the Cochrane Effective Practice and Organisation of Care group. The main limitation of current systematic reviews (and the main challenge confronting reviewers) is the quality of evaluations of quality improvement strategies. Fortunately, well done systematic reviews provide guidance for future studies. Indeed, at present the main contribution of systematic reviews in this area may be to highlight the need for more rigorous evaluations, but there are indications that the quality of evaluations is improving.20 Those planning and reporting evaluations of quality improvement should do so in the context of a systematic review. Similarly, those planning quality improvement activities should consider the results of systematic reviews when doing so.

ACKNOWLEDGEMENTS
The Cochrane Effective Practice and Organisation of Care (EPOC) group is funded by the UK Department of Health. Jeremy Grimshaw holds a Canada Research Chair in Health Knowledge Transfer and Uptake. The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Office Department of Health. The views expressed are those of the authors and not necessarily of the funding bodies. Phil Alderson became an EPOC editor in September 2002; the development of many of these methods predated his arrival and Phil did not consider his contribution sufficient for authorship. We would like to acknowledge his ongoing contribution to EPOC.

We would like to thank all the reviewers who have undertaken quality improvement reviews and helped us to develop many of the ideas within this paper. This paper reflects our experiences of undertaking reviews of quality improvement strategies over the last decade. It promotes some of the tools developed by EPOC over this time period, most of which are available freely from our website. We hope that this paper will increase the number and quality of systematic reviews of quality improvement strategies and that some of these will be done in collaboration with EPOC.

REFERENCES
1 Cochrane Collaboration. Glossary. In: Cochrane Collaboration. Cochrane Library. Issue 2. Oxford: Update Software, 2003.
2 NHS Centre for Reviews and Dissemination. Effective Health Care 1999;5:1–16.

3 Grimshaw JM, Shirran L, Thomas RE, et al. Changing provider behaviour: an overview of systematic reviews of interventions. Med Care 2001;39(Suppl 2):II2–45.
4 Grimshaw JM, Eccles MP, Walker AE, et al. Changing physician behavior. What works and thoughts on getting more things to work. J Cont Educ Health Professionals 2002;22:237–43.
5 Chalmers I. Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date, replicable evaluations. Ann Am Acad Political Soc Sci 2003 (in press).
6 Mulrow CD. Systematic reviews: rationale for systematic reviews. BMJ 1994;309:597–9.
7 Mowatt G, Grimshaw JM, Davis D, et al. Getting evidence into practice: the work of the Cochrane Effective Practice and Organisation of Care Group (EPOC). J Cont Educ Health Professions 2001;21:55–60.
8 Alderson P, Bero L, Grilli R, et al, eds. Cochrane Effective Professional and Organisation of Care Group. In: Cochrane Collaboration. Cochrane Library. Issue 2. Oxford: Update Software, 2003.
9 Effective Practice and Organisation of Care Group (EPOC). 2003. http://www.epoc.uottawa.ca (accessed 27 April 2003).
10 Clarke M, Oxman AD, eds. Cochrane Reviewers' Handbook 4.1.6. In: Cochrane Collaboration. Cochrane Library. Issue 2. Oxford: Update Software, 2003. Updated quarterly.
11 Egger M, Davey Smith G, Altman DG, eds. Systematic reviews in health care. Meta-analysis in context. London: BMJ Publishing, 2001.
12 Cooper H, Hedges LV, eds. The handbook of research synthesis. New York: Russell Sage Foundation, 1994.
13 Egger M, Davey Smith G. Principles and procedures for systematic reviews. In: Egger M, Davey Smith G, Altman DG, eds. Systematic reviews in health care. Meta-analysis in context. London: BMJ Publishing, 2001.
14 Eccles M, Grimshaw J, Campbell M, et al. Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care 2003;12:47–52.
15 Grilli R, Ramsay CR, Minozi S. Mass media interventions: effects on health services utilisation. In: Cochrane Collaboration. Cochrane Library. Issue 2. Oxford: Update Software, 2003.
16 Effective Practice and Organisation of Care Group (EPOC). The data collection checklist, section 2.1.1. http://www.epoc.uottawa.ca/checklist2002.doc (accessed 27 April 2003).
17 Gotzsche PC. Why we need a broad perspective on meta-analysis. BMJ 2000;321:585–6.
18 Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and health care outcomes. In: Cochrane Collaboration. Cochrane Library. Issue 2. Oxford: Update Software, 2003 (submitted for publication).
19 Oxman AD, Guyatt GH. A consumer's guide to subgroup analyses. Ann Intern Med 1992;116:78–84.
20 Lefebvre C, Clarke MJ. Identifying randomised trials. In: Egger M, Davey Smith G, Altman DG, eds. Systematic reviews in health care. Meta-analysis in context. London: BMJ Publishing, 2001.
21 Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2003 (in press).
22 Juni P, Altman DG, Egger M. Assessing the quality of randomised controlled trials. In: Egger M, Davey Smith G, Altman DG, eds. Systematic reviews in health care. Meta-analysis in context. London: BMJ Publishing, 2001.
23 Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally, 1979.
24 Whiting-O'Keefe QE, Henke C, Simborg DW. Choosing the correct unit of analysis in medical care experiments. Med Care 1984;22:1101–14.
25 Ukoumunne OC, Gulliford MC, Chinn S, et al. Methods for evaluating area-wide and organisation based interventions in health and health care: a systematic review. Health Technol Assess 1999;3(5).
26 Rao JNK, Scott AJ. A simple method for the analysis of clustered binary data. Biometrics 1992;48:577–85.
27 Donner A, Klar N. Issues in meta-analysis of cluster randomised trials. Stat Med 2002;21:2971–80.
28 Ramsay C, Matowe L, Grilli R, et al. Interrupted time series designs in health technology assessment: lessons from two systematic reviews of behaviour change strategies. Int J Technol Assess Health Care 2003 (in press).
29 Peto R. Why do we need systematic overviews of randomised trials? Stat Med 1987;6:233–41.
30 Bushman BJ. Vote counting methods in meta-analysis. In: Cooper H, Hedges L, eds. The handbook of research synthesis. New York: Russell Sage Foundation, 1994.
31 Freemantle N, Harvey EL, Wolf F, et al. Printed educational materials: effects on professional practice and health care outcomes (Cochrane Review). In: Cochrane Library. Issue 1. Oxford: Update Software, 1997.
32 Thompson SG. Why sources of heterogeneity in meta-analysis should be investigated. BMJ 1994;309:1351–5.
33 Sterne JAC, Egger M, Davey Smith G. Investigating and dealing with publication and other biases. In: Egger M, Davey Smith G, Altman DG, eds. Systematic reviews in health care. Meta-analysis in context. London: BMJ Publishing, 2001.
34 Oxman AD. Checklists for review articles. BMJ 1994;309:648–51.
35 Oxman AD, Guyatt GH. The science of reviewing research. Ann NY Acad Sci 1993;703:123–31.
