Adm Policy Ment Health (2008) 35:458–467
DOI 10.1007/s10488-008-0189-4
ORIGINAL PAPER
Quality Assurance and Improvement Practice in Mental Health
Agencies: Roles, Activities, Targets and Contributions
Curtis McMillen · Luis E. Zayas · Samantha Books · Madeline Lee
Published online: 8 August 2008
© Springer Science+Business Media, LLC 2008
Abstract Accompanying the rise in the number of mental
health agency personnel tasked with quality assurance and
improvement (QA/I) responsibilities is an increased need
to understand the nature of the work these professionals
undertake. Four aspects of the work of QA/I professionals in mental health
were explored in this qualitative study: their perceived
roles, their major activities, their QA/I targets, and their
contributions. In-person interviews were conducted with
QA/I professionals at 16 mental health agencies. Respondents perceived their roles at varying levels of complexity,
focused on different targets, and used different methods to
conduct their work. Few targets of QA/I work served as
indicators of high quality care. Most QA/I professionals
provided concrete descriptions of how they had improved
agency services, while others could describe none.
Accreditation framed much of agency QA/I work, perhaps
to its detriment.
Keywords Quality assurance · Quality improvement · Quality · Accreditation
A recent federal report identified quality as one of the most
pressing issues in mental health services and the
C. McMillen (✉) · S. Books · M. Lee
Center for Mental Health Services Research, George Warren Brown School of Social Work, Washington University in St. Louis, Campus Box 1196, One Brookings Drive, St. Louis, MO 63130, USA
e-mail: cmcmille@wustl.edu
L. E. Zayas
School of Social Work, Arizona State University, Tempe,
AZ, USA
implementation of quality assurance and improvement (QA/I) systems as one of the most promising means to improved
care (Institute of Medicine 2006). Mental health agencies
have been increasingly required by their accrediting bodies
to specify and implement plans to continuously monitor and
improve the quality of the services they provide (Commission on Accreditation of Rehabilitation Facilities 2008;
Council on Accreditation 2008; Joint Commission 2008). As
a result, many mental health agencies have hired employees
to conduct QA/I work, creating an unprecedented opportunity for service improvement.
Conceptually, modern QA/I models focus on determining
which service processes and outcomes are most important,
and determining how to measure and monitor them to identify
areas for improvement. They concentrate on searching for
key causes of identified quality problems, devising creative
solutions to these problems, implementing these changes, and
continuing to monitor and learn from these implementation
efforts (Crosby 1979; Deming 1986; Donabedian 2003; Harry
1988; Pande et al. 2000; Walton 1990; Zirps and Cassafer
1996). In addition, Hermann et al. (2006) recently touted the
important role that QA/I professionals could potentially play
in monitoring fidelity to evidence-based mental health treatments, but noted the likely absence of this work to date.
However, no research has been conducted that characterizes
QA/I practice in mental health organizations, describes the
targets for monitoring and improvement in these agencies, or examines whether these efforts result in genuine service improvement.
Some concern has been expressed that QA/I professionals
may focus their efforts on targets not related to service quality
(Hermann 2005) or that these efforts are easily derailed by
burdensome regulation and parochial views of what QA/I can
accomplish (Shortell et al. 1998). In fact, it remains unclear
from the conceptual literature where the boundaries exist in
QA/I work. Since a broad number of things could result in
Adm Policy Ment Health (2008) 35:458–467
improved services, what is and is not QA/I in mental health
services?
Given that no prior research has been conducted, qualitative methods were deemed appropriate to explore
issues related to QA/I activities and targets. We interviewed QA/I professionals from mental health agencies in
one U.S. geographic region to explore four questions: (1)
How do QA/I professionals perceive their role? (2) What
are their major work activities? (3) What are the targets of
their activities (what they are monitoring, measuring, and
improving)? And (4) what do they perceive as their major
contributions to the agencies they serve?
Methods
The study involved individual semi-structured interviews
and a small secondary structured interview to collect
demographic information with a purposeful sample of 16
QA/I professionals employed in private children’s mental
health services agencies in the St. Louis region. A maximum variation sampling strategy, which seeks to capture
the broadest range of information and perspectives on the
problem under study from diverse sources (Kuzel 1999), was
used to identify 23 local mental health agencies. Four of
these agencies reported not having a designated QA/I
employee. At one other agency, the position was vacant.
One agency was unresponsive to our recruitment efforts
and one agency QA/I professional declined to participate,
leaving 16 agencies (of 18 eligible, 89%) and their designated QA/I professional. The sampling included three
agencies that could be considered small (<US$2M annual
budgets) and three agencies that could be considered large
(>US$5M budgets). Four agencies were traditional community mental health centers, five included a residential
treatment component and all provided mental health services to children and families. Fifteen of the 16 agencies
were accredited, 10 by COA, four by the Joint Commission, and one by CARF. Five of the agencies had more than
one QA/I professional. We interviewed the person in
charge; if that person had been there less than 6 months,
we interviewed the person with the most seniority, which
was the case in one instance.
Recruitment of QA/I professionals was through mailed
letters followed by phone calls between January and April
of 2007. A $30 incentive was offered to participants. One
QA/I professional was recruited per agency, and to be
eligible had to be employed in that capacity for at least the
past 6 months. Study participants had a mean of 7.81
(±5.6) years of experience in QA/I. Three participants
were male. One was African American; the others were
Caucasian. To determine sample size based on the principle of data saturation (the point at which additional
participants yield little new information), the process of
data analysis ran concurrently with data collection. Saturation was reached by the 10th interview, but we decided to
complete interviews for other agencies that we had contacted to capture greater depth and diversity of experience.
In person, in-depth qualitative interviewing was the
main research strategy used, allowing us to explore the QA/
I professionals’ roles from their point of view without
a priori demarcations (Miller and Crabtree 1999). Interviews lasted approximately 60 min. Of our nine
initial interview questions, five (shown in the Appendix)
were germane to the focus in this paper. We asked QA/I
professionals (1) to describe their work and role, (2) to
describe what they did last week (usually while reviewing
their appointment calendar), (3) to describe routine tasks
they undertake, (4) to describe what earned their supervisors’ praise and (5) to specify what they had done as QA/I
professionals that made a difference. The team’s medical
anthropologist conducted several interviews and trained
and supervised two other interviewers. Interviews were
conducted from February to May 2007. All study participants provided informed consent to participate in this
study, which was approved by an Institutional Review
Board.
The interviews were audio recorded, professionally
transcribed and transferred into NVivo 7 (QSR 2006) for
data management. The authors constituted the analytical
team. The analysis followed a grounded theory approach,
an inductive iterative process of open coding that involves
breaking down, examining, and categorizing data (Strauss
and Corbin 1990). First, the analysts separately reviewed
the first eight transcripts as they were prepared. They met
weekly to discuss the content of these transcripts and to
consider emerging categories in order to develop the
codebook by consensus. After the codebook was completed, two analysts separately coded the first two
transcripts and then compared and adjusted their coding
patterns to standardize coding procedures before they
coded the remaining transcripts in NVivo 7. Coding reports
were then produced for further analyses. For some analyses, where the analytic task was to develop particular
categories from coded data, multiple readers were involved
and differences reconciled. For other analyses, where the
purpose was to identify key illustrative passages, the lead
author reported findings back to the team for interpretation,
critique and synthesis.
Results
Results are organized by research question, focusing on
role construction, activities, targets and major contributions
of QA/I practice. The QA/I professionals perceived their
roles differently, used different methods to carry out their
responsibilities, focused on different targets and described
different levels of contributions, but their regular tasks
were often similar.
Perceptions of the QA/I Role
QA/I professionals portrayed their jobs in very different
ways. Some professionals described their work in narrow
terms, focusing on one major activity that encompassed
most of their effort (5/16), while many others described
their work in broad conceptual terms, mostly related to
leading, organizing and managing the QA/I process (9/16).
Two provided long lists of responsibilities and
resisted efforts to describe their role in more general terms.
Among the QA/I professionals who defined their work
narrowly, the emphases varied. One stressed chart reviews.
‘‘Chart review is really about it. That’s my main focus. We
are a large agency [and] have a lot of charts, so it takes up a
lot of time.’’ Another respondent defined the work in terms
of creating, administering and interpreting survey data.
‘‘Everyone who comes in contact with the agency gets a
survey,’’ said this professional. Another focused on writing,
receiving and distributing reports on agency activities. ‘‘I
make sure all this paperwork is being funneled through the
channels.’’ Another used a variety of strategies but said the
job was all about monitoring. ‘‘I would say the main gist of
what I do is monitoring and monitoring, the monitoring
that goes on in the program.’’ Several of the QA/I professionals who described their jobs narrowly worked at
smaller agencies.
For those who defined the job more broadly, there was at
least a partial focus on developing a program of QA/I
activities for the agency. ‘‘Part of my scope of responsibility is to set the direction for quality activities at the
agency,’’ said one of these professionals. Yet the respondents defined this mission using different terms. Said one:
‘‘My job is to ensure that we meet our [agency] objectives.’’ Another respondent described leading the effort to
make sure that clients achieved their outcomes: ‘‘I am
responsible for collecting and managing the outcomes
throughout the organization.’’ Still another saw the work as
finding processes that were not working and making them
better: ‘‘My job now is almost 100% process improvement.’’ Several of the QA/I professionals who described
their jobs more broadly worked at larger agencies.
One professional saw herself largely as a fixer of
problems, especially those that threaten the financial survivability of the agency. ‘‘My role relates to dealing with
issues that might be problematic for the agency…The
position was created to deal with survival issues… If we
don’t have proper documentation, we lose funding. Once,
we had I think 1000 issues [from an external audit].
That’s a lot of money [to lose] for a small agency. We were
able to get that down to 80 questionable items. Billing is
very important.’’
External requirements admittedly defined the work of
some of the respondents who described their role broadly
and conceptually. One of these professionals described the
work as ‘‘aligning the agency’s day-to-day work with the
[requirements of] external sources that provide guidance
about how we are supposed to do our work.’’ Another
described the work in terms of following the rules: ‘‘We
make sure that all of our programs are doing what they are
supposed to be doing according to their contract guidelines,
their program plan guidelines, as well as accreditation and
licensing standards, any kind of governing body.’’ To this
respondent, QA/I professionals ‘‘…become the experts on
rule and procedure and how it applies to programs.’’
External requirements, especially accreditation, also
framed the work of QA/I professionals who described long
lists of job responsibilities. Said one respondent, ‘‘I do many
things at the agency for [QA/I]. The bulk of that I would say
would be related to our national accreditation.’’ Another
said, ‘‘It’s really making sure that everything we do helps us
to meet our accreditation standards. This is a very important
criterion for us.’’ QA/I professionals at large and small
agencies defined their jobs in terms of accreditation.
Although implicit in several respondents’ transcripts, two
QA/I professionals explicitly described a dual function
involving both meeting compliance standards and putting in
place a QA/I system. ‘‘One of the major pieces that I’m
involved in is making sure that we are in compliance with all
of the accreditation standards. We are accredited by the Joint
Commission. So, in many ways that’s what frames my
job…Then, also it’s my responsibility to define what our
process is for quality improvement.’’
Major Activities of QA/I Professionals
Despite describing their roles differently, these QA/I professionals reported spending substantial parts of their time
in similar activities: leading and serving on committees,
collecting and analyzing data, and writing various reports
on the results of the data analyses. Fifteen of the 16
respondents talked about regularly attending and leading a
number of meetings and committees. ‘‘A lot of the work is
done via committees,’’ said one respondent. Several QA/I
professionals described an overall committee in charge of
the agency’s QA/I process, which the QA/I professional
often led. Several also mentioned being part of an overall
agency executive management or leadership committee.
They also reported serving on committees related to
employee credentialing, billing, forms, medical records,
accreditation, safety, incident reports, security, building
and grounds, HIPAA compliance and clinical care, as well
as ad hoc committees created to address targeted problems.
Some agencies had a process in which the administration
reviewed QA/I monitoring information and developed
committees to address identified quality problems.
The administration comes up with a focus where they
want a committee to look at possible improvements.
Then, they come up with an opportunity statement,
which is a broad statement of the problem, and what
they kind of are about as a solution. They assign
people to sit on that committee based on their information, and their knowledge about the particular
thing. Then, they’ll assign me to be the chairperson of
that Ad Hoc committee, because I’m the one person
who’s trained in the process.
Although academics and institutional review boards
often emphasize the difference between QA/I activities and
research (e.g., Bellin and Dubler 2001), the respondents
classified much of their work as research, involving data
collection, data management or data analysis. This work
was reported by all but one respondent. Chart reviews,
client interviews, mailed surveys, and use of agency
administrative databases were described as routine methods of data collection.
The work varied substantially based on whether an
agency had sophisticated electronic client record systems.
One respondent said that with his/her agency’s record
system, ‘‘I can just generate reports…push a button, and it
will tell me anything I need to know and categorize data
any way I want.’’ In these agencies, QA/I professionals
reported being involved in developing and managing these
systems.
I’m not an information systems professional at all, I
have a degree in social work. But, the task in the last
5, 6 years has been for me to be the bridge between
operational needs and the programmer, so that we
create an information system that has value for our
work here and allows us to do our work as efficiently
as possible.
The process was quite different at agencies without an
electronic client record, where they relied heavily on chart
reviews: ‘‘We have these file reviews and program
reviews…and we’re taking that information and it moves
up to me and then I do my tallying and aggregating and I
create spreadsheets for that.’’
Communication activities that involved documenting
and disseminating results were described by 13 of the 16
participants as part of their work routine. Several mentioned having various reports due each quarter: ‘‘Quarterly
I need to draw all that information together to say, ‘What
have we been working on? What trends have we identified?
What do we need to improve?’’’ Others mentioned reports
required by accreditors, funders, and regulators, each in its
own specified format. ‘‘We have an annual Maintenance of
Accreditation Report. That is a year-end report that the
Council on Accreditation requires. They say, ‘tell us
everything you did in 2006 that shows you do quality
improvement work.’ And you literally have to write a
report saying everything you did in 2006.’’ Others said that
they created different versions of the same report for different audiences because ‘‘what’s useful for the executive
director is not useful for our ground staff.’’
In addition, several respondents described other
responsibilities that were assigned to them in their QA/I
role that would not be considered traditional QA/I work
using classic formulations (e.g., Donabedian 2003). This
included, for example, being responsible for the agency
policy and procedure manuals, developing disaster plans,
managing information systems, or serving as the HIPAA
privacy officer. Two QA/I professionals mentioned that
they were often pulled from their primary job to write
grants. In addition, the QA/I professionals mentioned being
assigned a variety of tasks that needed to be done but had no obvious owner within the organization. For example, one QA/I professional was
asked to lead an effort to develop a plan to respond to a
bird flu pandemic. As one participant declared, ‘‘Everything falls within the QA rubric.’’
The Targets of QA/I Activities
QA/I professionals reported that their work was aimed at
monitoring and improving (1) service provision, (2) safety,
(3) consumer outcomes, (4) consumer perspectives, (5)
staff perspectives and issues, (6) community perspectives,
and (7) productivity and finances. Table 1 presents specific
examples of expressed targets supporting each of these
categories. No one participant mentioned targets covering
all of the seven categories. Some agencies focused primarily on outcomes, some on safety, and some on service
provision. Some respondents, however, reported that their
agencies were monitoring an enormous number of things,
while other agencies appeared to be monitoring little or
nothing. QA/I professionals who were monitoring many things attributed this to accreditation and other
external requirements. Some of the targets for monitoring
were not things that would typically be included as part of
the QA/I function (fundraising, finances, mileage).
Although most agencies were monitoring some aspect of
service provision, few of their specific targets seemed to get
at the core dimensions of quality as described in the conceptual literature (Martin 1993; Megivern et al. 2007), even
when reduced to its most common elements such as technical proficiency and interpersonal sensitivity (Megivern
et al. 2007). The most common targets in this category, for
example, were whether there was a treatment plan in the chart, whether it was signed by the client, and whether progress notes were present. One agency’s QA/I department was reported as having procedures that judged whether the treatment received by clients was appropriate for their problems, and another had procedures to evaluate whether treatment received was appropriate to the psychiatric diagnosis. One respondent mentioned evidence-based services, but no one reported monitoring fidelity to evidence-based treatment.

Table 1 Categories of QA/I targets: what is being monitored and changed in QA/I efforts?

Service provision
  Presence of treatment plans
  Whether care meets regulators’ requirements
  Whether treatment plans are completed within specified time frames
  Level of family involvement
  Whether treatment plans are signed by consumers
  Screening for specific conditions
  Whether objectives are specified in the treatment plan
  Indications of required assessments (annual psychiatric evaluation, dental exam, physical exam)
  Whether objectives are written for needs identified in assessments
  Evidence-based practices
  Presence of progress notes
  Are consumers being seen (provider fraud detection)?
  Whether referrals are made (when screening indicates a problem)
  Are consumers being seen within specified timeframes?
  Whether follow-ups are made with clients after missed appointments
  Were clients assisted with medication reminder lists?
  Socialization opportunities
  Medication documentation

Safety/risk mgt.
  Adverse or critical events (general statements)
  Child maltreatment reports
  Physical injuries to staff or consumers
  Presence of medication procedures
  Suicidal behavior
  Infection surveillance
  Facility safety
  Fire drills conducted
  Medication errors
  Do staff know what to do in emergencies?
  Medication tracking (in/out of agency)
  Staff CPR certifications
  Use of restraints, locked isolation
  Staff TB testing, flu shots
  HIPAA compliance

Outcomes
  Outcomes (general statements)
  Recidivism
  Clients’ perception of improvement
  School truancy, suspension, expulsion
  Whether treatment plan objectives were met
  Hospitalization
  Time remained sober

Consumer perspectives
  Consumer satisfaction
  Consumer perceptions of safety
  Consumer complaints
  Food satisfaction
  What clients wanted
  Whether treated respectfully

Staff perspectives and issues
  Employee satisfaction
  Employee safety (incidents)
  Employee retention
  Employee exit information
  Employee perceptions of support, team relations, attitudes toward supervisors and administrators, etc.
  Credentialing (checking credentials)
  Staff views of the functioning of other departments
  Training completion
  Number of days to fill staff vacancies

Community perspectives
  Community attitudes toward the agency
  Unmet community needs
  Referral source satisfaction

Productivity/finances
  Whether billing reports were submitted
  Number of clients served
  Mileage
  Fundraising
  Productivity (billing) per worker
  Finances
Descriptions of what respondents were evaluated on and
praised for by supervisors also informed our analyses. They
reported being praised for and evaluated on three general
areas: (a) achieving or maintaining accreditation (5/16), (b)
improving the organization’s efficiency (5/16), and (c)
improving the organization’s results (3/16). Accreditation
was the primary yardstick against which several respondents
were measured.
The criteria for evaluation… I don’t want this to
sound bad, but it’s really making sure that everything
we do helps us to meet our accreditation standards.
This is a very important criterion for us and to not
meet those standards and to have trouble with the site
visit or to have major recommendations that put our
accreditation at risk are critical.
Said another: ‘‘If we didn’t get reaccredited, they’d look
at me and say, ‘What happened? What went wrong?… You
lead that effort.’’ Respondents were commended also for
making systems work better. For example, one respondent
earned praise for digitizing agency manuals and forms and
placing them on the agency’s intranet. Few QA/I professionals (3/16) reported that they were praised for or
evaluated on changing results for clients or programs. One exception was a QA/I professional who was evaluated on
‘‘being able to foster … a culture within the organization
that allowed us to reduce the use of locked isolation and
physical restraints’’ (see example below).
Major Contributions
We asked participants to provide examples of how their
work made a positive difference in client services or in the
agency. Several provided multiple examples. Ten respondents provided at least one in-depth answer that appeared
to involve a substantial contribution of one variety or
another. We detail the two examples that we felt most
directly demonstrate how QA/I can make a difference in
agency practice and outcomes. The first involves reducing
the use of physical restraints and locked isolation.
We had collected enough data internally to feel that our
own practices were out of control. We were using
restraint too often, were placing kids in locked seclusion too often for too long, and staff were getting hurt in
the process. So, it was real clear that we needed to do
something. We went about the process of doing many,
many things over the course of the next 3 or 4 years that
ultimately had a good impact in this whole area. We
embraced a new behavioral management model. We
got trained on it. We brought in in-house trainers. We
did extensive training with staff. We really worked to
improve our data sets on the use of incidents, in general,
but particularly the use of restraints and locked seclusions, so that we had a good sense of what our baseline
activity was as we got started. … I was the champion for
all this. I can’t take full credit for everything that occurred, but we did some things structurally where we shifted around some staff responsibilities, added some staff, … we improved our documentation methods. We automated our incident reports system, all towards getting better data. We convened monthly a quality committee that looked at the data and had big thoughts about what we could do in response to it. Over the course of a few years, and not to sound too grandiose, but we changed the culture here, where locked seclusions and restraints became the exception to the rule, rather than an immediate response to aggression. And, we have the data to substantiate that because we’ve continued to track the data.

The second detailed example involved a sustained effort to markedly improve outcomes in a troubled program.

One example that comes to mind is a program where they had a lot of turnover. Their supervisor left and in the middle of it all [the program’s funder] came in to do a review. It was not good. We got put on an external corrective action plan that basically said, you have to make these improvements or we’ll pull your contract. So, we totally ramped up the services that we provide through QI. We did weekly reviews of their records. We worked very closely with the new supervisor who was brought in, in terms of expectations and just helping her build on that corrective action plan and use that corrective action plan as a tool to focus her efforts and decide, okay these are the first five things we’re going to work on; these are the next five things we’re going to work on. Although it took some time, [the funder] was able to see that we had a plan. They were very pleased that QI was involved and it made a huge difference. The program was able to, within about 8 months, get off that corrective action plan and now is probably one of the stronger programs in our agency.
One QA/I professional talked about how they raised client satisfaction across the agency. ‘‘We
raised client satisfaction scores last year. We spent a lot of
time figuring out what clients want and how we could
change the things we do.’’ The other examples offered by
respondents did not directly involve improving consumer
outcomes. They included:
• Devising an electronic system that alerted supervisors when a client was not assigned a case manager, preventing clients from ‘‘falling through the cracks.’’
• Leading an effort to get the agency up-to-date on research in the field.
• Installing a new client data management system that reduced the recording burden on clinicians.
• Problem solving a way to reduce no-shows in an outpatient clinic.
One QA/I professional described how QA/I had improved the agency’s financial bottom line: the agency became more competitive in grant applications because it could document consumer outcomes through a system the respondent had designed. Several mentioned that their data
collection and analysis activities uncovered problems that
would never have been addressed if they did not have data
from consumers. This included problems with food service,
accessibility, and unappealing bathrooms.
In stark contrast, six QA/I professionals struggled to come
up with a single concrete example of how they had made a
difference. The respondent who defined the job as all about chart reviews said she/he once found a piece of information in
a chart that an auditor could not find. A QA/I professional
who defined the work as all about compliance with policies
said, ‘‘Just, you know, leading the effort in dotting every ‘i’
and crossing every ‘t.’’’ Similarly, another said, ‘‘I don’t
know if I have one example, but just everyday, like doing the
chart reviews. … So, basically if the charts are all in line, then
we’re fine from a billing standpoint.’’ One listed redesigning
the client satisfaction surveys as the major accomplishment.
For another, it was redesigning the agency’s progress note to
make it easier for the QA/I team to find the information that
they monitor. Another could not come up with an example
from her current job, but said, ‘‘At my previous agency, I was
able to get them nationally accredited.’’
Discussion
This is the first study to examine the activities and roles of
QA/I professionals in mental health agencies. The results
were illuminating in several regards. We focus this discussion on four issues: the difficulty several respondents
had in identifying ways that QA/I had made a difference in
their agencies, the substantial variation in QA/I activities
across agencies, how accreditation frames QA/I work, and
the lack of QA/I focus on quality service provision. Then,
we offer specific recommendations for mental health
administrators, policy makers and future research.
The fact that several QA/I professionals could not provide an example of how their work made a difference leads
us to conclude that their agencies were not well-served by
their QA/I work. Services and outcomes were not being
improved despite a great deal of QA/I activity. Those QA/I
professionals whose work focused on compliance with an
array of external standards and internal policies appeared to
be those least likely to detail how their work had improved
services or outcomes. QA/I work as conceptualized by
leading theorists is all about quality improvement, not
compliance. When the focus in these frameworks is placed
on monitoring (e.g., Donabedian 2003), it is on monitoring
as a means to finding quality problems that can be
improved. Here, it appears that some agencies were monitoring to be able to report that they were monitoring.
QA/I professionals’ activities varied substantially across organizations: they focused on different targets, used different methods, and perceived their work differently. No uniform way of
doing QA/I in mental health agencies has developed and no
single profession or organization has taken on the task of
preparing QA/I professionals for their roles. This variation
may also reflect a developmental phenomenon. Agencies
may get more sophisticated in their QA/I work over time,
starting with collecting data on a few things in mostly
manual ways, and gradually developing more systems to
capture more and different kinds of data. But
more systems to monitor more targets do not necessarily
translate into effective QA/I interventions that improve
quality care and consumer outcomes.
Several QA/I professionals reported that the foci of their
work were largely determined by accreditation and other
external requirements rather than being thoughtfully determined through priority-setting procedures that targeted
problems that, if solved, would have substantial impact on
consumers’ lives. Few of the respondents’ stories involved
improved outcomes. Few of the professionals perceived that
their main job responsibility was improving services. And
few QA/I professionals reported earning praise for improving consumer outcomes. This focus away from improving
care and outcomes appeared to be especially acute in agencies that were attempting to monitor a large number of things
across many domains. Accreditation requires QA/I processes
to be in place to monitor and improve quality (e.g., COA
2008; CARF 2008). But, instead of using QA/I systems to
monitor for quality and shape practice toward what the
agency determines to be high quality service, some QA/I
professionals monitor for compliance to accreditation and
other external standards and work to shape practice toward
meeting them. This is a corruption of QA/I frameworks. In
order for QA/I professionals to have a substantial impact
using a compliance-based strategy, agencies must trust that
accreditors and regulators get it right, that promulgated
standards focus on the aspects of service most likely to
enhance consumer outcomes. This is a questionable
assumption, given the lack of research to date on the effect of
accreditation and regulation on consumer outcomes.
A second problem with focusing QA/I monitoring efforts
on compliance with standards is that there are a lot of them.
The Joint Commission, COA and CARF each have over 200
standards applicable to mental health programs for children
and families. COA mandates quarterly reviews of case
records, incidents, accidents, and grievances. It mandates the
assessment of consumer satisfaction, consumer outcomes
and evaluations of programs. It also requires monitoring of
operations and management and includes financial viability,
systems efficiency, and job satisfaction as examples of
operations monitoring. The Joint Commission mandates the
collection of consumers’ perceptions of care, treatment and
services, and measurement of medication management,
restraint use, seclusion use and treatments. Although less
prescriptive, CARF also mandates monitoring of business
functions and service effectiveness, efficiency, access and
satisfaction. This proliferation of standards is exacerbated by
the need for some agencies to monitor the standards of
multiple regulators. It may be costly yet intellectually easy to
drift into a ‘‘monitor-everything, but improve little’’ mindset.
Most mental health agencies have limited resources to
devote to QA/I. Therefore, they can only focus their energies
in a few areas at a time. These should be the areas where their
work can create the greatest impact.
Seven primary domains of QA/I targets emerged from this
research. All appear to be important aspects of agency life
and most have the potential to affect the quality of consumer
care. However, we were struck by how rarely a target for
improvement or monitoring reflected high quality service
processes. Noting the presence of a treatment plan is not an
indicator of quality, although a lack of one may reflect poor
quality. The lack of reported struggle about how to define
quality service in meaningful ways is itself notable. For QA/I
to improve the quality of mental health services, agencies
may need to do a better job of defining quality care.
Evidence-based treatment defines high quality care as care
delivered with fidelity to the intervention. As Hermann et al.
(2006) likely would have predicted, however, no QA/I professional reported monitoring treatment fidelity.
The QA/I role is impressive in its complexity. QA/I
professionals reported a lot of responsibilities, from
accreditation maintenance to improving consumer outcomes. Their work potentially covers a wide range of
knowledge, from clinical processes, to research methods, to
standards and rules, to management information science, and
more. To support this important work in mental health, the
QA/I role may benefit from professional academic preparation, intense continuing education, increased resourcing, and
systematic research on the effectiveness of specific QA/I
methodologies. The investment in these resources may
depend on whether QA/I truly improves the care consumers
receive and their clinical and functional outcomes. Some of
the examples related by QA/I professionals in this study
provide preliminary suggestions that QA/I holds
potential to improve care, but only if done well.
Implications for Administrators

The results from this study lead us to six recommendations for administrators in mental health agencies. (1) Since QA/I systems can look very different from one another, administrators should think about how they want the QA/I
enterprise in their agencies to be constructed. For example,
do they want QA/I systems focused on agency objectives,
consumer outcomes, quality processes, or on external
standards? (2) Administrators should also assess the
breadth of QA/I monitoring that is ongoing at their agencies and determine the breadth of focus that is ideal or
feasible. A QA/I staff that is asked to monitor too many
things may lose the ability to identify and respond to the
most pressing quality problems. A QA/I staff that focuses
solely on the most pressing problems may improve delivered services, but may not maintain the monitoring that
external bodies require. Administrators can determine
whether they want a QA/I system based on monitoring a
wide number of things, or one that is based on identifying
and fixing big problems with big impacts. (3) Administrators should ask their QA/I team to detail the major ways
that they have improved services or agency functioning. If
they cannot answer the question, as some of our respondents could not, the administrator likely has a dysfunctional
QA/I team and may need to take steps to replace or retool
the team. (4) Administrators should ask their QA/I professionals to detail the indicators of high quality mental
health care that are monitored. If there are none, or the QA/I team thinks the presence of a treatment plan is an indication of quality, the administrator should spur efforts to
develop some. (5) Agency administrators should resist the
temptation to fill up the portfolio of QA/I professionals’
responsibilities with ‘‘other duties as assigned.’’ QA/I work
has the potential to contribute in important ways, but only
if QA/I professionals can devote themselves to monitoring
and improving care. (6) Administrators should determine
whether the skills and qualifications of their QA/I team
meet the needs of what has become an evolving and
increasingly complex enterprise.
Policy Implications
Accreditors and other external regulators of mental health
services need to recognize the burden of the ever-increasing
number of standards that agencies are asked to monitor. Not
only is it expensive to mount a system to monitor a high
number of standards, it may take focus away from efforts to
identify an agency’s primary quality problems and to
improve them. It is ironic that the same forces that led to the
hiring of QA/I professionals in mental health services may
hinder their effectiveness by diffusing their efforts.
Limitations and Research Implications

While this first look at QA/I professionals in mental health agencies is informative, it is based on results from QA/I professionals in one geographic region. Survey research on a larger, more representative scale is a logical next step to help
determine what it is that QA/I professionals are asked to do in
mental health agencies nationally and to better identify
national goals for the training and education of QA/I professionals for mental health. This exploratory study was a
necessary first step before this more representative work, as it
identifies ways the QA/I enterprise can be constructed and
possible categories of QA/I activities and targets that can be
used as the basis for survey content. This would allow, for
example, estimations of the time spent on the different kinds
of activities. In addition, the study collected views only from
the QA/I professional. Although this professional is the best
informant of what the QA/I professional does, the views of
senior managers and clinicians could have been informative.
Future work may wish to collect information on the QA/I
enterprise from multiple viewpoints. The probes used in this
study focused mostly on clarifying what the QA/I professionals did and how they perceived their roles, and less on
why they did what they did. Future work may wish to delve
into this issue more thoroughly.
Conclusions
This study found wide variation in how QA/I professionals
in mental health agencies approach their jobs. This likely
reflects the lack of standardization and training in this new,
growing, and demanding field of mental health practice.
QA/I work done well can likely make a difference in
agency practice and outcomes. QA/I work done poorly can
likely drive agency staff mad with requirements for
excessive monitoring for little gain. The fault of excessive
monitoring may not lie with the QA/I team, but with
external accreditors and regulators. While accreditation is
spurring the QA/I movement, the focus on meeting a large
number of accreditation and other regulatory standards
may deter in-depth QA/I efforts that truly improve identified problems. The QA/I role in mental health deserves
increased professional attention from researchers, academic institutions, and agency administrators.
Acknowledgments This research was supported by a grant from the
National Institute of Mental Health (P30 MH 068579).
Appendix
Initial questions used to elicit information about QA/I
activities, targets and contributions.
Please describe your work as a quality assurance
professional.
What are the main priorities of your work, the things
you focus attention on?
In general we are interested in things that quality
assurance professionals have done that have really made a
difference in terms of delivering quality services to patients
or consumers. Could you give us an example of something
that you have done that has made a difference?
What kinds of work or accomplishments would your
supervisor praise you for?
What kinds of activities did you do at work last week?
You may want to look at your work calendar to help you
answer these questions.
– What projects were you working on?
– Who did you meet with?
– What kinds of routine tasks did you do?
References
Bellin, E., & Dubler, N. (2001). The quality improvement—Research
divide and the need for external oversight. American Journal of
Public Health, 91, 1512–1517.
Commission on Accreditation of Rehabilitation Facilities International. (2008). Child and youth services standards manual. Tucson, AZ.
Council on Accreditation. (2008). Council on accreditation standards: Private organizations (8th Ed.). Available online at
www.COAstandards.org.
Crosby, P. B. (1979). Quality is free: The art of making quality
certain. New York: McGraw-Hill.
Deming, W. E. (1986). Out of the crisis. Cambridge, MA: MIT Center
for Advanced Engineering.
Donabedian, A. (2003). An introduction to quality assurance in health
care. Oxford: Oxford University Press.
Harry, M. J. (1988). The nature of six sigma quality. Schaumburg, IL:
Motorola University Press.
Hermann, R. C. (2005). Improving mental health care: A guide to
measurement-based quality improvement. Washington, DC:
American Psychiatric Press.
Hermann, R. C., Chan, J. A., Zazzali, J. L., & Lerner, D. (2006).
Aligning measurement-based quality improvement with implementation of evidence-based practices. Administration and
Policy in Mental Health and Mental Health Services Research,
33, 636–645. doi:10.1007/s10488-006-0055-1.
Institute of Medicine. (2006). Improving the quality of health care for
mental and substance-use conditions. Washington, DC: National
Academy of Sciences.
Joint Commission. (2008). Standards for behavioral health care. Oakbrook Terrace, IL: Joint Commission.
Kuzel, A. J. (1999). Sampling in qualitative research. In B. F. Crabtree & W. L. Miller (Eds.), Doing qualitative research (2nd ed., pp. 33–45). Thousand Oaks, CA: Sage.
Martin, L. L. (1993). Total quality management in human service
organizations. Newbury Park, CA: Sage Publications.
Megivern, D. A., McMillen, J. C., Proctor, E. K., Striley, C. W.,
Cabassa, L. J., & Munson, M. R. (2007). Quality of care:
Expanding the social work dialogue. Social Work, 52, 115–124.
Miller, W. L., & Crabtree, B. F. (1999). Depth interviewing. In B. F.
Crabtree & W. L. Miller (Eds.), Doing qualitative research
(pp. 33–45). Thousand Oaks, CA: Sage.
Pande, P. S., Neuman, R. P., & Cavanagh, R. R. (2000). The six sigma
way: How GE, Motorola, and other top companies are honing
their performance. New York: McGraw-Hill.
Qualitative Solutions and Research [QSR]. (2006). NVivo 7.0: Using
NVivo in qualitative research [Computer software and manual].
Melbourne, Australia: QSR International.
Shortell, S. M., Bennett, C. L., & Byck, G. R. (1998). Assessing the
impact of continuous quality improvement on clinical practice:
What it will take to accelerate progress. The Milbank Quarterly,
76, 593–624. doi:10.1111/1468-0009.00107.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research:
Grounded theory procedures and techniques. Newbury Park,
CA: Sage.
Walton, M. (1990). Deming management at work. New York: Plenum.
Zirps, F., & Cassafer, D. J. (1996). Quality improvement in the
agency: What does it take? In P. Pecora, W. R. Seelig, F. Zirps,
& S. M. Davis (Eds.), Quality improvement and evaluation in
child and family services: Managing into the next century (pp.
145–174). Washington, DC: Child Welfare League of America.