Health Policy
Article history:
Received 11 December 2018
Received in revised form 7 April 2019
Accepted 24 April 2019

Keywords: Accreditation; Quality improvement; Measurement of quality; Patient outcomes; Human resources; Process change

Abstract

Objective: This study aimed to establish whether longitudinal participation in an accreditation program is translated into improvement in continuity of quality patient care and human resource management (HRM) processes outcomes.

Materials and methods: This was a secondary data analysis of accreditation panel data from acute hospitals participating in the Australian Council on Healthcare Standards' Evaluation and Quality Improvement Program (EQuIP). EQuIP criteria data from 311 hospitals were collected by external surveyors across 2003–2006 (Time 1) and 2007–2010 (Time 2). Mandatory accreditation criteria ratings at Time 1 were used to determine hospital performance group membership (1 = below moderate, 2 = moderate, 3 = above moderate). Analysis was undertaken of ratings across continuity of quality patient care and HRM process criteria at Time 1 and Time 2.

Results: Continuity of quality patient care and HRM processes improved across time in the three performance groups. Lower performing hospitals improved at a greater rate than moderate and higher performing hospitals. The groupings and performance order did not change over time.

Conclusions: An accreditation program is an external driver that facilitates continual and systemic quality improvement changes to sub-systems within an organisation.

© 2019 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.healthpol.2019.04.006
662 D. Greenfield et al. / Health Policy 123 (2019) 661–665
[15]. Whether improvements are maintained across cycles, or are realised in other settings, is not known.

The associations between accreditation and organisational-level quality of patient care or process outcomes have normally been examined at single points in time. We have limited empirical evidence of the longitudinal impact of participation in an accreditation program on organisational performance outcomes. One case study that has taken a longitudinal focus demonstrated that one hospital, in a developing country, did sustain improvements across the accreditation program cycle [3]. Further investigations are necessary to reveal whether this finding holds for other organisations, in both developing and developed countries. This is an issue with significant policy, financial, and quality and safety implications.

A key question concerning many stakeholders is: does the quality improvement component of an accreditation program, across several cycles, translate into identifiable organisational outcomes? That is, across accreditation cycles, does organisational performance improve, stall or decrease? Furthermore, do organisations with different levels of accreditation performance demonstrate the same or different levels of improvement longitudinally? The unique contribution of this study was to investigate these issues by focusing on continuity of quality patient care and HRM processes. This study aimed to establish whether longitudinal participation in an accreditation program is translated into improvement in continuity of quality patient care and HRM processes outcomes.

2. Method

2.1. Program setting, sample and research design

The study focused on the ACHS accreditation program, the Evaluation and Quality Improvement Program (EQuIP), implemented in Australia [18]. At the time of the study, accreditation of acute and sub-acute care organisations was a policy endorsed by State Governments, and the ACHS program accredited facilities from all States and Territories across Australia. ACHS was the major accreditation agency for acute and sub-acute care organisations, with over 1300 facilities accredited, representing over 90% market share [19]. ACHS has been facilitating an accreditation program for 40 years, commencing in 1974; organisations have now participated in several accreditation cycles, including under EQuIP [20].

ACHS uses the approach of responsive regulation [21]; that is, engaging with industry representatives to develop, implement and revise EQuIP. The development process is incremental, with revisions approximately every three years to incorporate developments in the safety and quality knowledge base. The adjustments from edition three to edition four were to increase the focus on consumer participation and the need for evidence of clinical and organisational outcomes [19]. EQuIP, based on the principle of continuous quality improvement, operates on a four-year cycle and is divided into three broad assessment categories: clinical, support and corporate criteria [19]. Across these categories there are 13 standards with 45 criteria in total: 14 mandatory and 31 non-mandatory items. The breakdown is as follows: clinical category, six standards and 21 criteria (seven mandatory, 14 non-mandatory); support category, five standards and 14 criteria (three mandatory, 11 non-mandatory); and corporate category, two standards and 10 criteria (four mandatory, six non-mandatory). An organisation's quality and safety achievements, and efforts to implement improvement strategies, are rated against a five-point scale (Little Achievement, Some Achievement, Moderate Achievement, Extensive Achievement and Outstanding Achievement). Ratings of at least Moderate Achievement against the mandatory criteria are necessary to obtain accreditation status.

Participating organisations are required to undertake actions to self-assess and improve their performance against the accreditation standards. This includes focusing on both organisational and clinical systems and processes [11,14]. An organisation assesses itself against the EQuIP standards and produces a self-assessment report for the ACHS. This report is reviewed by an external peer-survey team, which also conducts an on-site visit to verify the improvement claims, documentation and care practices. The survey team provides a written report back to ACHS, which can include recommendations for further improvement and the granting of accreditation status. Within this four-year period, usually at the mid-point or thereabouts, a further on-site survey assessment is undertaken. A surveyor team visits to corroborate the continued achievement of the safety and quality standards and the implementation of any recommendations, and to confirm ongoing accreditation status [21–23].

External peer-surveyors, employed by ACHS, visited and rated hospitals against the EQuIP standards criteria across the two accreditation periods. As per the ACHS accreditation program's normal practice, different survey teams visited each hospital in each period. Where possible, at least one member of the survey team was retained for the subsequent visit. Previous research into the accrediting agency's surveyor program has demonstrated the strategies and processes that promote reliability [24]. These include the requirement that surveyors attend annual training addressing knowledge developments and current interpretation directions associated with the program standards and assessing practices. Survey teams were also recognised as an important mediating influence on individuals, promoting consistency in interpretation and reliability in an accreditation program [24]. Nevertheless, in an attempt to account for variations in individual surveyor and survey team assessments, the study population of hospitals was classified into one of three mandatory accreditation performance groups, which became the focus of analysis.

The first assessment period was 2003–2006 and used EQuIP3 (Time 1); the second was 2007–2010 and used EQuIP4 (Time 2). The EQuIP standards are rated on a five-point scale, from '1' indicating low achievement, through '3' corresponding to moderate achievement, to '5' representing outstanding achievement. The inclusion criteria for study participants were that they were accredited through the ACHS accreditation program, from the public or private acute hospital sector, for both time periods. The study used participating organisations' EQuIP mandatory standards criteria outcomes as secondary panel data for analysis.

2.2. Measures and analysis

Following the processes established by Townsend et al., the study measures and analysis processes were implemented [1,2]. First, three hospital-specific details were used as control variables in the analysis: ownership (1 = public, 2 = private); geographical region (state/territory regions were binary coded: 0 = hospital not in this geographical region, 1 = hospital in this geographical region); and hospital size (1 = 1–49 beds, 2 = 50–99 beds, 3 = 100–199 beds, 4 = 200–499 beds, 5 = more than 500 beds).

Second, mandatory accreditation performance groups were derived. Data were analysed to classify participating organisations into one of three mandatory accreditation performance groups: below moderate (1), moderate (2) and above moderate (3). The groupings were achieved by calculating an average mandatory accreditation performance score for each hospital at Time 1; this is the first assessment for each hospital and different for each [12,13].

Finally, two further measures were derived from the accreditation data: an HRM processes score and a continuity of quality patient care score, which were composites of five and six items, respectively. Organisational means for each score were calculated
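The grouping and composite-score steps described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the cut-points separating the three performance groups are assumptions for illustration (the paper derives groups from each hospital's average mandatory criteria rating at Time 1 on the five-point scale, but does not state the cut-points in this section), and the example ratings are invented.

```python
from statistics import mean

def performance_group(mandatory_ratings, low_cut=2.5, high_cut=3.5):
    """Classify a hospital from its mean mandatory criteria rating.

    Returns 1 (below moderate), 2 (moderate) or 3 (above moderate).
    The cut-points are hypothetical, not taken from the study.
    """
    avg = mean(mandatory_ratings)
    if avg < low_cut:
        return 1
    if avg <= high_cut:
        return 2
    return 3

def composite_score(item_ratings):
    """Composite measure: the organisational mean of its items
    (five items for HRM processes, six for continuity of quality
    patient care in the study)."""
    return mean(item_ratings)

# Invented example: one hospital's 14 mandatory criteria ratings at Time 1.
ratings = [3, 3, 4, 3, 2, 3, 3, 4, 3, 3, 3, 2, 3, 3]
group = performance_group(ratings)  # mean = 3.0, so group 2 (moderate)
```

The same organisational-mean logic covers both derived measures; only the item sets differ.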
Table 1
Repeated measures analysis of covariance results for the two dependent variables.

                                                      Dependent variable 1               Dependent variable 2
Effect                                                Type III SS  F          Part. eta2  Type III SS  F          Part. eta2  df
Within-subjects effects
  Time                                                .000         .007       .000        .016         .383       .001        1, 302
  Time x mandatory accreditation performance groups   2.062        18.767***  .111        1.741        20.755***  .121        2, 302
Between-subjects effects
  Mandatory accreditation performance groups          2.773        42.927***  .221        2.615        47.873***  .241        2, 302

Note: Analyses are controlling for hospital sector, geographical region and size. ***p < .001.
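The partial eta squared values in Table 1 relate each effect's Type III sum of squares (SS) to its error SS: eta_p^2 = SS_effect / (SS_effect + SS_error). The sketch below, using only values readable from the table, back-calculates an error SS purely to illustrate that relationship; the error terms themselves are not reported in this excerpt.

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: effect SS relative to effect-plus-error SS."""
    return ss_effect / (ss_effect + ss_error)

# Time x performance groups, first dependent variable in Table 1:
# SS_effect = 2.062 with reported eta_p^2 = .111, which implies
# SS_error = SS_effect * (1 - .111) / .111 (approximately 16.5).
ss_effect = 2.062
ss_error = ss_effect * (1 - 0.111) / 0.111  # back-calculated, illustrative
eta = partial_eta_squared(ss_effect, ss_error)
print(round(eta, 3))  # recovers .111 by construction
```

By this measure the interaction effects (.111 and .121) are moderate, and the between-group effects (.221 and .241) larger, consistent with the table's pattern of group differences.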
References

[3] Devkaran S, O'Farrell PN. The impact of hospital accreditation on quality measures: an interrupted time series analysis. BMC Health Services Research 2015;15:137.
[4] Mitchell JI, Izad Shenas SA, Kuziemsky C. Governance standards: a roadmap for increasing safety at care transitions. Healthcare Management Forum 2015;28:28–33.
[5] Mumford V, Forde K, Greenfield D, Hinchcliff R, Braithwaite J. Health services accreditation: what is the evidence that the benefits justify the costs? International Journal for Quality in Health Care 2013;25:606–20.
[6] Saleh SS, Bou Sleiman J, Dagher D, Sbeit H, Natafgi N. Accreditation of hospitals in Lebanon: is it a worthy investment? International Journal for Quality in Health Care 2013;25:284–90.
[7] Shaw C. Accreditation is not a stand-alone solution. Eastern Mediterranean Health Journal 2015;21:226–31.
[8] Greenfield D, Pawsey M, Braithwaite J. What motivates professionals to engage in the accreditation of healthcare organizations? International Journal for Quality in Health Care 2011;23:8–14.
[9] Australian Commission on Safety and Quality in Health Care. Annual report 2012-13. Sydney: Australian Commission on Safety and Quality in Healthcare; 2013.
[10] Hinchcliff R, Greenfield D, Moldovan M, Westbrook JI, Pawsey M, Mumford V, et al. Narrative synthesis of health service accreditation literature. BMJ Quality & Safety 2012;21:979–91.
[11] Alkhenizan A, Shaw C. Impact of accreditation on the quality of healthcare services: a systematic review of the literature. Annals of Saudi Medicine 2011;31:407–16.
[12] Braithwaite J, Greenfield D, Westbrook J, Pawsey M, Westbrook M, Gibberd R, et al. Health service accreditation as a predictor of clinical and organisational performance: a blinded, random, stratified study. Quality & Safety in Health Care 2010;19:14–21.
[13] Miller MR, Pronovost P, Donithan M, Zeger S, Zhan C, Morlock L, et al. Relationship between performance measurement and accreditation: implications for quality of care and patient safety. American Journal of Medical Quality 2005;20:239–52.
[14] Falstie-Jensen AM, Nørgaard M, Hollnagel E, Larsson H, Johnsen SP. Is compliance with hospital accreditation associated with length of stay and acute readmission? A Danish nationwide population-based study. International Journal for Quality in Health Care 2015;27:450–7.
[15] Greenfield D, Kellner A, Townsend K, Wilkinson A, Lawrence SA. Health service accreditation reinforces a mindset of high-performance human resource management: lessons from an Australian study. International Journal for Quality in Health Care 2014;26:372–7.
[16] Greenfield D, Braithwaite J, Pawsey M. Health care accreditation surveyor styles typology. International Journal for Quality in Health Care 2008;21:435–43.
[17] Touati N, Pomey M-P. Accreditation at a crossroads: are we on the right track? Health Policy 2009;90:156–65.
[18] Australian Council on Healthcare Standards. National report on health services accreditation performance 2003-2006. Sydney: Australian Council of Healthcare Standards; 2007.
[19] ACHS. 2008-2009 annual report. Ultimo, NSW: The Australian Council on Healthcare Standards; 2009.
[20] ACHS. Background on accreditation. ACHS; 2018. https://www.achs.org.au/about-us.
[21] Greenfield D, Pawsey M, Braithwaite J. Accreditation: a global regulatory mechanism to promote quality and safety. In: Sollecito W, Johnson J, editors. Continuous quality improvement in health care. New York: Jones and Barlett Learning; 2013. p. 513–31.
[22] Greenfield D, Hinchcliff R, Westbrook M, Jones D, Low L, Johnston B, et al. An empirical test of accreditation patient journey surveys: randomised trial. International Journal for Quality in Health Care 2012;24:495–500.
[23] Greenfield D, Moldovan M, Westbrook M, Jones D, Low L, Johnston B, et al. An empirical test of short notice surveys in two accreditation programs. International Journal for Quality in Health Care 2012;24:65–71.
[24] Greenfield D, Pawsey M, Naylor J, Braithwaite J. Are accreditation surveys reliable? International Journal for Quality in Health Care 2009;22:105–16.
[25] Cohen J, Cohen P. Applied multiple regression/correlation for the behavioral sciences. Hillsdale: Erlbaum; 1983.
[26] Greenfield D, Braithwaite J. Health sector accreditation research: a systematic review. International Journal for Quality in Health Care 2008;20:172–83.
[27] Pomey M-P, Contandriopoulos A-P, François P, Bertrand D. Accreditation: a tool for organizational change in hospitals? International Journal for Quality in Health Care 2004;17:113–24.
[28] Desveaux L, Mitchell JI, Shaw J, Ivers NM. Understanding the impact of accreditation on quality in healthcare: a grounded theory approach. International Journal for Quality in Health Care 2017;29:941–7.
[29] Berssaneti FT, Saut AM, Moreno MC. Evaluating the impact of accreditation on Brazilian healthcare organizations: a quantitative study. International Journal for Quality in Health Care 2017;29:713–21.