A Framework for Assessing the Performance of Nonprofit Organizations

American Journal of Evaluation, 1-21
© The Author(s) 2014
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1098214014545828
aje.sagepub.com

Department of Public Administration, North Carolina State University, Raleigh, NC, USA

Corresponding Author:
Chongmyoung Lee, Department of Public Administration, North Carolina State University, Raleigh, NC 27695, USA.
Email: clee18@ncsu.edu
Abstract
Performance measurement has gained increased importance in the nonprofit sector, and contem-
porary literature is populated with numerous performance measurement frameworks. In this article,
we seek to accomplish two goals. First, we review contemporary models of nonprofit performance
measurement to develop an integrated framework in order to identify directions for advancing the
study of performance measurement. Our analysis of this literature illuminates seven focal per-
spectives on nonprofit performance, each associated with a different tradition in performance
measurement. Second, we demonstrate the utility of this integrated framework for advancing theory
and scholarship by leveraging these seven perspectives to develop testable propositions aimed at
explaining variation across nonprofits in the adoption of different measurement approaches. By
better understanding how performance measurement is conceptualized within the sector, the field will
be better positioned to both critique and expand upon normative approaches advanced in the lit-
erature as well as advance theory for predicting performance measurement decisions.
Keywords
performance measurement, performance assessment, performance evaluation, outcome, nonprofit
Introduction
Financial and competitive pressures within the nonprofit sector have led to an increased emphasis on
performance measurement. As a result of a growing emphasis on accountability in government fund-
ing, nonprofits, like their public and private counterparts, are under increasing pressure to demonstrate
excellence in performance in order to secure financial resources (Cairns, Harris, Hutchison, &
Tricker, 2005; Moxham, 2009a). In conjunction, the current economic climate has had an adverse
effect on the value of public donations made to the nonprofit sector (McLean & Brouwer, 2010),
suggesting that government grants and contracts will play a more prominent role in the general
financial portfolio of the nonprofit sector. As a result, the pressure for nonprofit organizations to
demonstrate their performance in order to secure continued funding is acute (Martikke, 2008).
Method
A general review of the literature on nonprofit performance measurement revealed an impressive
array of scholarly efforts over the past decade. Some scholars have focused on understanding the
patterns of performance measurement use in nonprofits (e.g., Barman, 2007; MacIndoe & Barman,
2012; Poole, Nelson, Carnahan, Chepenik, & Tubiak, 2000). Other studies focus on specific tools in
facilitating nonprofit performance measurement such as the use of logic models (e.g., Hatry, Houten,
Plantz, & Taylor, 1996; Russ-Eft & Preskill, 2009; Valley of the Sun United Way, 2008; W.K.
Kellogg Foundation, 2004). Still others address the substantive focus of performance measurement
in nonprofits, and it is this latter body of work that concerns us here. In this review, we considered contem-
porary nonprofit performance measurement frameworks, defined as frameworks present in the lit-
erature between 1990 and 2012. To identify an initial population of books and articles, electronic
searches of five bibliographic databases were conducted: (1) Social Science Citation Index, (2)
Academic Search Premier, (3) Public Administration Abstracts, (4) JSTOR Arts and Sciences, and
(5) WorldCat. The search terms used included ‘‘performance evaluation,’’ ‘‘performance measure-
ment,’’ ‘‘performance appraisal,’’ ‘‘outcome measurement,’’ or ‘‘impact measurement’’ and ‘‘non-
profit.’’ To expedite the identification of relevant articles, only the articles that included one or
more of the search terms in the title or abstract were reviewed. Additional resources were identi-
fied with the assistance of knowledgeable colleagues in the field.
Each article was then reviewed to identify only those that contained an explicit performance mea-
surement framework for a nonprofit organization, including a defined scope and constituent factors
of performance specific to the nonprofit sector. This review led to the identification of 18 distinct
nonprofit performance evaluation frameworks that were included for full analysis: (1) Bagnoli and
Megali (2011), (2) Beamon (1999), (3) Berman (2006), (4) Cutt and Murray (2000), (5) Greenway
(2001), (6) Herman and Renz (2008), (7) Kaplan (2001), (8) Lampkin et al. (2006), (9) Land (2001),
(10) Median-Borja and Triantis (2007), (11) Moore (2003), (12) Moxham (2009b), (13) Newcomer
(1997), (14) Penna (2011), (15) Poister (2003), (16) Sawhill and Williamson (2001), (17) Sowa et al.
(2004), and (18) Talbot (2008).
We also reviewed related efforts such as studies of performance measurement focused on a pro-
gram or case level of analysis but not at the organizational level of analysis (e.g., Crook, Mullis,
Cornille, & Mullis, 2005; Hatry, Fisk, Hall, Schaenman, & Snyder, 2006; James, 2001; Poole,
Duvall, & Wofford, 2006; Sobelson & Young, 2013) as well as works that focus on a specific aspect
of nonprofit performance (e.g., Benjamin, 2012; Ebrahim & Rangan, 2010) such as Eckerd and
Moulton’s (2011) investigation of how role heterogeneity and external environment influence the
adoption and uses of performance measures in nonprofits. These related works did not propose a
specific framework of nonprofit performance measurement, which is the focus of this review. None-
theless, they offered valuable insights that helped to inform our analysis. As with any review meth-
odology, we recognize that our resulting sample may not be exhaustive of the population of
frameworks proposed between 1990 and 2012. However, we feel confident that the 18 frameworks
identified provide a representative picture of contemporary nonprofit performance measurement
perspectives in the United States.
In order to develop an integrated framework, each source was reviewed and the measurement
framework adopted by each author was identified. For each source, the dimensions and measures
of performance proposed were summarized in a table along with any available information on meth-
odology, background, and context related to the measurement framework. Last, we content analyzed
the frameworks in order to identify common themes in both evaluands and perspectives of nonprofit
performance adopted by each author. Based on the common perspectives, we constructed Table 1.
Each core perspective of nonprofit performance has several main concepts and associated measure-
ment indicators. In the subsequent section, we elaborate the core perspectives, concepts, and mea-
surement indicators summarized in Table 1.
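To make the mechanics of this tabulation concrete, the brief Python sketch below shows one way such a framework-by-perspective coding matrix could be tallied. The sketch and its example entries are purely illustrative assumptions for exposition (drawn loosely from Table 1), not the authors' actual coding instrument or data.

```python
# Illustrative sketch only: a hypothetical framework-by-perspective coding matrix
# of the kind produced during content analysis. The entries below are assumptions
# for demonstration, not the authors' actual coding.
perspectives = ["inputs", "capacity", "outputs", "outcomes",
                "client_satisfaction", "public_value", "network_legitimacy"]

coding = {
    "Bagnoli & Megali (2011)": {"inputs", "outputs", "outcomes", "network_legitimacy"},
    "Kaplan (2001)":           {"capacity", "client_satisfaction"},
    "Moxham (2009b)":          {"inputs", "outputs", "outcomes"},
}

# Tally how many of the reviewed frameworks address each perspective.
for p in perspectives:
    count = sum(1 for covered in coding.values() if p in covered)
    print(f"{p}: {count} of {len(coding)} frameworks")
```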
Table 1. Core Perspectives of Nonprofit Performance.

Inputs
– Sources: Bagnoli and Megali (2011); Beamon (1999); Cutt and Murray (2000); Kaplan and Norton (1996); Kendall and Knapp (2000); Median-Borja and Triantis (2007); Moxham (2009b); Newcomer (1997)
– Main concept: The ability of a nonprofit to acquire necessary resources (financial and nonfinancial) and efficiently use those resources to achieve resiliency, growth, and long-term sustainability
– Measurement indicators: Increase in revenue from year to year; diversity of revenue streams; net surplus of financial reserves; ability to acquire and manage human resources (e.g., employees, volunteers); strength of the relationship with resource providers (e.g., funders, volunteers)

Organizational capacity
– Sources: Kaplan (2001); Moore (2003); Sowa, Selden, and Sandfort (2004)
– Main concept: Consists of human and structural features that facilitate an organization's ability to offer programs and services
– Measurement indicators: Employee satisfaction; employee motivation, retention, capabilities, and alignment; employee education/counseling; staff and executive perspective on operational capabilities; operating performance (cost, quality, and cycle times) of critical processes; information system capabilities; capacity to innovate

Outputs
– Sources: Bagnoli and Megali (2011); Berman (2006); Cutt and Murray (2000); Kendall and Knapp (2000); Moxham (2009b); Newcomer (1997); Poister (2003); Sawhill and Williamson (2001)
– Main concept: Entails a specification of the scale, scope, and quality of products and services provided by the organization; focuses on organizational targets and activities that have direct linkages to organizational mission accomplishment
– Measurement indicators: Frequency and hours of services provided; on-time service deliveries; achieved specified goals in relation to services; number of participants served; client/customer response time; quality of services provided (physical and cultural accessibility, timeliness, courteousness, and physical condition of facilities)

Outcomes: behavioral and environmental changes
– Sources: Bagnoli and Megali (2011); Berman (2006); Greenway (2001); Lampkin et al. (2006); Moxham (2009b); Penna (2011)
– Main concept: State of the target population or the condition that a program is intended to affect; focuses on the differences and benefits accomplished through the organizational activities
– Measurement indicators: Increased skills/knowledge (increase in skill, knowledge, learning, and readiness); improved condition/status (participant social status, participant economic condition, and participant health condition); modified behavior/attitude (incidence of bad behavior, incidence of desirable activity, and maintenance of new behavior)
Nonprofits create organizational capacity through the acquisition of resources that enhance their ability to offer
quality programs and services (Kaplan, 2001; Moore, 2003; Sowa, Selden, & Sandfort, 2004). Next,
nonprofits generate specific products and services by utilizing the capacity and acquired resources (Bag-
noli & Megali, 2011; Berman, 2006; Cutt & Murray, 2000; Kendall & Knapp, 2000; Moxham, 2009b;
Newcomer, 1997; Poister, 2003; Sawhill & Williamson, 2001).
Near-term outcomes that are generated as a consequence of a nonprofit’s activities have been
conceptualized and measured in two key ways. Some frameworks focus on the evaluation of beha-
vioral and environmental changes such as increased skills, improved condition, and modified beha-
vior (Bagnoli & Megali, 2011; Berman, 2006; Greenway, 2001; Lampkin et al., 2006; Moxham,
2009b; Penna, 2011). Other performance frameworks highlight customer satisfaction as an impor-
tant near-term outcome (Kaplan, 2001; Median-Borja & Triantis, 2007; Newcomer, 1997; Poister,
2003; Penna, 2011). In addition, several authors have emphasized the importance of considering the
public value that is created at the community level as a result of a nonprofit’s activities (Greenway,
2001; Hills & Sullivan, 2006; Lampkin et al., 2006; Land, 2001; Moore, 2003; Penna, 2011). Last,
some frameworks have highlighted how well the organization has managed relationships with key
stakeholders and gained legitimacy in their field and community as a critical aspect of nonprofit per-
formance (Bagnoli & Megali, 2011; Herman & Renz, 2008; Moore, 2003; Talbot, 2008).
In this article, we describe these dimensions as core perspectives of nonprofit performance. As
illustrated in Figure 1, it is important to note that these perspectives do not operate independently of
one another. Performance in the areas of resource acquisition and organizational capacity has down-
stream consequences for what a nonprofit is able to accomplish in terms of outcomes and public
value. In the same way, organizational outcomes can subsequently influence the acquisition of new
resources. In the subsequent section, we will discuss each of these perspectives in greater detail
including key operationalizations and indicators suggested by the current literature.
Input
The public and nonprofit sectors have been dominated by a results-oriented performance perspective,
arguably to the neglect of other complementary perspectives. In response to this, several scholars
have proposed performance evaluation frameworks that take into account that nonprofit organi-
zations work under budget and resource constraints (e.g., Moxham, 2009b). Thus, measuring
how inputs are acquired and utilized is argued by some to be a key dimension of
nonprofit performance. The main concepts dominating this perspective are resource acquisition
and utilization (Beamon, 1999; Kaplan & Norton, 1996; Kendall & Knapp, 2000; Median-Borja &
Triantis, 2007) and expenditure (Cutt & Murray, 2000; Moxham, 2009b; Newcomer, 1997).
Resource acquisition and utilization focuses on how well nonprofits are able to acquire resources
in order to generate value. Two main approaches have been advocated for in this domain. The first
approach focuses on the acquisition and utilization of money, facilities, equipment, staffing and
training, and the preparation of programs and services (Berman, 2006; Median-Borja & Triantis,
2007). For example, in the three-part framework of Beamon (1999), resource performance metrics
measure the level of resources used to meet the system’s objectives. Examples of resource perfor-
mance metrics include the number of employees/volunteers (or hours) required for an activity,
increase in revenue from year to year, diversity of revenue streams, and net surplus of financial
reserves. Also, the financial perspective in Kaplan’s Balanced Scorecard for Nonprofits framework
is used to examine how the resources have been used to develop necessary internal processes
(Kaplan & Norton, 1996). In addition to focusing on successful resource acquisition, scholars also
emphasize that nonprofits need to make wise use of the resources they have (Median-Borja & Triantis,
2007). Accordingly, a second approach for measuring nonprofit performance advocated in the litera-
ture is through an emphasis on expenditures (Cutt & Murray, 2000; Moxham, 2009b; Newcomer,
1997). Expenditure-focused measurement is commonly used in the public and nonprofit area in grants
and contract management and is an integral part of nonprofit performance evaluation frameworks
proposed by Moxham (2009b). A common approach to examining nonprofit performance through expen-
ditures is ‘‘end of funding cycle’’ reports that compare expenditures to outputs with an eye toward
evaluating the efficiency of organizational activities (Moxham, 2009b).
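To illustrate how two of the input-perspective indicators noted above might be computed in practice, the sketch below calculates year-over-year revenue growth and a simple revenue-diversification score from hypothetical figures. The diversification formula (one minus the sum of squared revenue shares) is a common operationalization in the revenue-diversification literature rather than a measure prescribed by the frameworks reviewed here, and all numbers are invented for the example.

```python
# Hypothetical revenue figures (in dollars) for a single nonprofit; illustration only.
revenue_by_source = {"government": 400_000, "donations": 250_000,
                     "fees": 150_000, "foundation_grants": 200_000}
revenue_last_year = 900_000

total = sum(revenue_by_source.values())

# Year-over-year revenue growth, one input-perspective indicator.
growth = (total - revenue_last_year) / revenue_last_year

# Revenue diversification as 1 minus the sum of squared revenue shares:
# 0 means fully concentrated in one source; higher values mean more evenly diversified.
shares = [amount / total for amount in revenue_by_source.values()]
diversification = 1 - sum(s ** 2 for s in shares)

print(f"Revenue growth: {growth:.1%}")
print(f"Diversification score: {diversification:.2f}")
```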
Organizational Capacity
Several scholars have advocated the importance of including organizational capacity development
in the evaluation of nonprofits (Kaplan, 2001; Moore, 2003; Sowa et al., 2004). This dimension is
strongly connected with the input perspective. Although the input perspective focuses primarily on
resource acquisition and utilization, the organizational capacity perspective focuses on developing
the capability to generate outputs/outcomes effectively. That is to say, this perspective is about eval-
uating how well a nonprofit has constructed effective internal processes and structures to use the
resources efficiently and effectively toward the advancement of the organization’s mission. It also
includes developing the requisite capacity to deliver the services, adopt necessary innovations, and
expand/alter programs and operations to meet changing needs (Kaplan, 2001). The justification for
adopting an organizational capacity perspective is based on the premise that processes of innovation,
adaptation, and learning strongly influence organizational activities and overall performance. Exam-
ple indicators of high performance for this perspective include employee education/counseling,
employee satisfaction, staff and executive perspectives on operational capabilities, and capacity
to innovate. There are several key concepts represented within this perspective. First, scholars high-
light the need for nonprofits to continually work to streamline and improve internal processes that
deliver value to customers and reduce operating expenses. For instance, abandoning an unnecessary
service provision procedure can be one way to reduce operating expenses (Sowa et al., 2004). Sec-
ond, scholars have emphasized the importance of considering a nonprofit’s performance in strength-
ening their organizational capacity for learning, innovation, and growth. This focus includes the
development of innovations that create entirely new processes, services, or products as well as mak-
ing quality improvements to people and systems within the organization. For example, in the
balanced scorecard, Kaplan (2001) focuses on quality improvements to people and systems within
the organization. Relevant indicators identified by Kaplan (2001, p. 357) include ‘‘employee moti-
vation, retention, capabilities, and alignment, as well as information system capabilities.’’ Moore
(2003, p. 18) also mentions the importance of building operational capacity through learning and
innovation. He argues that through such processes, organizations can enhance ‘‘the technologies
that convert inputs into outputs, and outputs into satisfied clients and desired outcomes.’’ In the long
term, a nonprofit's performance will depend on ‘‘the rate at which it can learn to improve its
operations as well as continue to carry them out’’ (Moore, 2003, p. 22). Further, Kaplan (2001,
p. 366) highlights the innovative capacity of an organization as the extent to which ‘‘the boards,
investors, fund managers, foundations, and social entrepreneurs bring all their resources to bear
in the right ways to strategic applications.’’
Last, among the dimensions of MIMNOE, Sowa et al. (2004) encourage a focus on organizational
capacity in terms of management and program capacity. By this, they advocate that organizations
evaluate both how well programs are designed and operated and whether they are perceived by
employees, managers, and clients as being designed and operated appropriately. Although outcome
measures are often chosen to represent organizational performance, there is a complex mechanism
hidden behind those outcomes. To improve performance, organizations should understand how
management capacity and program capacity influence the resulting outcomes (Sowa et al., 2004).
Output
A focus on outputs represents another perspective for conceptualizing the performance of nonpro-
fits. Outputs refer to the countable goods and services produced by nonprofit activities, the direct
products of those activities in pursuit of the organizational mission (Bagnoli & Megali, 2011; Berman, 2006;
Cutt & Murray, 2000; Kendall & Knapp, 2000; Poister, 2003). This perspective focuses on the tar-
gets/activities of the organization, generally as these relate and contribute to mission accomplishment
(Sawhill & Williamson, 2001).
Output measurement addresses whether nonprofit activities achieved the specific targets initially
intended (Moxham, 2009b). This approach involves highlighting the physical products of the activ-
ities carried out by nonprofits as a valuation (or quantitative accounting) of outputs. Thus, output measures are
generally quantitative and include criteria such as the number of people who have been served and
the number of services that have been offered (Moxham, 2009b). Such information can be analyzed
in relation to the input perspective, such as relative production costs, in order to measure efficiency
and productivity (Bagnoli & Megali, 2011). Output measures are recognized as valuable in a perfor-
mance measurement portfolio, as they generally have the advantage of being easier and cheaper to
monitor than other types of outcomes. However, they are also known to be powerful drivers of beha-
vior within the organization and therefore need to be considered carefully. Consequently, scholars
(e.g., Sawhill & Williamson, 2001) emphasize the importance of output measures that have clear
linkages to the organizational mission to avoid goal displacement.
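To make the output-to-cost comparison mentioned above concrete, the short sketch below computes simple unit-cost ratios by dividing program expenditure by counts of outputs delivered. All figures and variable names are hypothetical and serve only to illustrate the arithmetic; they are not drawn from any framework reviewed here.

```python
# Hypothetical end-of-funding-cycle figures; illustration only.
program_expenditure = 240_000    # dollars spent over the funding cycle
participants_served = 800        # an output indicator
service_hours_delivered = 6_400  # another output indicator

# Simple efficiency ratios comparing expenditures to outputs.
cost_per_participant = program_expenditure / participants_served
cost_per_service_hour = program_expenditure / service_hours_delivered

print(f"Cost per participant served: ${cost_per_participant:,.2f}")
print(f"Cost per service hour: ${cost_per_service_hour:,.2f}")
```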
Public Value
Several approaches have been suggested for measuring public value, including consensus conferences, citizens' juries/panels, attitudinal sur-
veys, opinion polling, and the public value measurement framework developed by The Work Foun-
dation. Moulton and Eckerd (2012) categorize the public value of nonprofits into the following six
dimensions: (1) service delivery, (2) innovation, (3) advocacy, (4) individual expression, (5) social
capital creation, and (6) citizen engagement. These authors suggest survey items to assess each
dimension of public value of nonprofits. Similarly, among the four outcome types of Lampkin
et al. (2006), community-centered outcomes can be included in this perspective. Although the other
dimensions in the Lampkin’s framework—program centered outcomes and participant centered out-
comes—are restricted in the organizational level, community-centered outcomes are more oriented
to the public value and community value. They suggest community-centered outcomes can be mea-
sured through citizen panel evaluations and opinion polls.
Network/Institutional Legitimacy
Finally, networks and institutional legitimacy may be considered as a key component of a nonpro-
fit’s performance framework. Within this perspective, performance is conceptualized in terms of
how an organization has managed its relations with other stakeholders and established a reputation
for trustworthiness and excellence within this broader network. As shown in Figure 1, network and
legitimacy issues can be examined across all of a nonprofit's activities, such as acquiring and utilizing
resources, developing organizational capacity, and creating value for customers and the community.
This perspective can be broken down further into three core areas of focus. First, our review
of contemporary frameworks of nonprofit performance revealed an increased emphasis on under-
standing nonprofits’ effectiveness from the perspective of interorganizational networks and
network-level effectiveness. That is because the effectiveness of an organization often depends on
the ‘‘effectiveness of other organizations and people with which it is interconnected and the ways
in which they are interconnected’’ (Herman & Renz, 2008, p. 409). Also, networks are of particular
importance to nonprofits today, given government incentives for collaboration to reduce
costs and duplication of effort (Guo & Acar, 2005; Sharfman, Gray, & Yan, 1991). Therefore, the
working relationship with other organizations should be considered one of the main perspectives of
nonprofit performance. Nonprofit organizations can advance their missions by collaborating with
other organizations that share their goals or that have resources they can utilize (Moore,
2003). This kind of interorganizational network/collaboration can be measured through success-
ful partnership cases and stakeholder/board surveys on those partnerships. Consistent with this, several
scholars (e.g., Herman & Renz, 2008; Moore, 2003) have advocated for examining network
partnerships as a critical aspect of nonprofit performance.
Second, managing relations with funders, volunteers, and other stakeholders was identified as a key
area of focus in the network perspective. For example, Moore (2003) emphasized nonprofit efficacy
at expanding support and authorization as an aspect of nonprofit performance. By this, he was refer-
ring to funder relations and diversification, legitimacy with the general public, relations with govern-
ment regulators, reputation in the media, and credibility with civil society actors. ‘‘If
nonprofit managers are to keep their attention focused on both the overall success and sustainability
of their strategy, they have to develop and use measures that monitor the strength of their relation-
ship with financial supporters, and public legitimaters and authorizers as well as those that record
their impact on the world’’ (Moore, 2003, p. 16). In this sense, funder/stakeholder surveys or the image
of the organization in the mass media are presented as indicators for measuring this perspective.
Finally, institutional legitimacy as an aspect of nonprofit performance focuses on verifying that a
nonprofit has respected ‘‘its self imposed rules (statute, mission, program of action) and the legal
norms applicable to its institutional formula’’ (Bagnoli & Megali, 2011, p. 158). For the legitimacy
dimension, Bagnoli and Megali (2011, p. 162) insist that the following aspects should be considered: ‘‘1)
institutional coherence, thus the coherence of activities with the stated mission; 2) compliance with
general and particular laws applicable; and 3) compliance with secondary norms.’’ Talbot (2008,
p. 4) also emphasizes this concept in his framework: legitimacy justifies ‘‘the raising of public
funds to carry out collective action projects that the market would not provide.’’ This perspective can
be measured by checking whether organizations have conformed to the institutional norms
and laws of their working environment.
In summary, findings from this review reveal a portrait of nonprofit performance that encom-
passes seven core perspectives. The first perspective emphasizes how important it is that nonprofits
be adept at acquiring, managing, and utilizing resources in light of increasingly challenging envir-
onments. In the second perspective, nonprofit performance is viewed through the lens of achieve-
ments in strengthening the internal capacity of the organization. In the third and fourth,
traditional program evaluation considerations of organizational outputs and outcomes are high-
lighted. Although these perspectives have been central to performance measurement for some time,
contemporary frameworks have grown in sophistication. Scholars and practitioners now have access
to a range of alternative designs such as behavioral, environmental, and client satisfaction
approaches to outcome measurement. In the public value perspective of nonprofit performance,
scholars urge the field to not abandon efforts to assess the substantive value that nonprofits achieve
for society as a whole (Bagnoli & Megali, 2011; Moore, 2003). As stated by Bagnoli and Megali
(2011, p. 156), nonprofits must continue to consider the degree to which their activities have ‘‘con-
tributed to the well-being of the intended beneficiaries and also has contributed to community-wide
goals.’’ Last, contemporary frameworks of nonprofit performance provide a basis for adopting
an ecological view of nonprofits, viewing them as embedded in a complex array of stakeholder rela-
tionships and institutional fields. As the social problems nonprofits confront become more complex
and beyond the scope of any single organizational solution, interorganizational networks are likely
to become even more significant as a mechanism for problem solving. Given this increasing impor-
tance of collaboration in the development of new solutions to complex social problems, network
management and institutional legitimacy are likely to become even more prominent among the crit-
ical perspectives of nonprofit performance.
The first perspective is rooted in the strategic management tradition. In this perspective, the focus is intraorganizational and performance
measurement is viewed as a managerial tool for improving work processes toward the accomplish-
ment of a predefined set of organizational goals (e.g., Kaplan & Norton, 1996; Moynihan, 2005; Pet-
tigrew, Woodman, & Cameron, 2001; Poister, 2010; Russ-Eft & Preskill, 2009; Skinner, 2004). In a
second perspective, performance measurement is viewed from an extraorganizational standpoint as a
symbolic activity aimed at enhancing legitimacy and credibility in the eyes of key organizational
stakeholders such as potential clients, donors, funders, and policy makers (e.g., Modell, 2001,
2009; Moynihan, 2005; Roy & Seguin, 2000; Yang, 2009).
There is ongoing, spirited debate concerning institutional versus operational explanations for why
organizations do what they do (e.g., Bielefeld, 1992; Guo & Acar, 2005; Hill & Lynn, 2003; Moy-
nihan, 2005; Sowa, 2009; Williams, Fedorowicz, & Tomasino, 2010). The same can be true for per-
formance measurement approaches adopted by nonprofits. Resource dependency and institutional
theorists reject the notion that organizational actions such as the adoption of a given performance
measurement approach need to have anything to do with improving work processes. Although
operational improvement is one possible motivation, this body of scholarship argues it is not the
most likely explanation. Rather, these theorists argue that organizations are primarily concerned
with structurally adjusting themselves to manage relational dependencies and conform to institu-
tional norms in order to achieve legitimacy (DiMaggio & Powell, 1983; Meyer & Rowan, 1977;
Roy & Seguin, 2000; Scott, 1995; Sowa, 2009). The organizational mandate to effectively manage
external relations is so paramount that Pfeffer and Salancik (2003) have argued that managerial
actions are often best understood in terms of their symbolic importance rather than operational
value. Performance measurement information is frequently used as part of the public narrative of
a nonprofit, helping the organization to tell its story in reports and other media. In light of this,
we suggest that performance measurement is one such managerial domain that is shaped most
strongly by external pressures.
Funding Type
Nonprofit organizations have long been characterized as operating in resource-scarce environments
and subsequently subject to considerable influence by current and potential funders (Bielefeld, 1992;
Chang & Tuckman, 1994; Froelich, 1999; Hodge & Piccolo, 2005; Pfeffer & Salancik, 2003).
Resource dependency theorists argue that resources are the basis of an organization's strategy, struc-
ture, and survival (Aldrich, 1976; Hodge & Piccolo, 2005; Pfeffer & Salancik, 2003). Understanding
the symbolic importance of performance measurement to external stakeholders (Proposition 1) pro-
vides a foundation to consider how nonprofits may tailor performance measurement portfolios in
accordance with their funding model. There has been significant scholarship aimed at understanding
the implications of different funding models for nonprofit design and performance (e.g., Crittenden,
2000; Carroll & Stater, 2009; Hodge & Piccolo, 2005; Kara, Spillan, & DeShields, 2004); however,
to date, there has been no scholarship aimed at understanding how funding models may influence the
adoption of different performance measurement approaches.
Nonprofit funding comes from five primary sources: (1) private contributions of individ-
uals (e.g., membership fees, donations), (2) corporate donors, (3) government grants and con-
tracts, (4) commercial enterprise (e.g., concession sales), and (5) foundation grants and
contracts (Bielefeld, 1992; Chang & Tuckman, 1994; Froelich, 1999; Gronbjerg, 1993;
Hodge & Piccolo, 2005). Each funding source suggests a different audience and consumer
of performance measurement information.
Consideration of the unique interests, priorities, and values of these different audiences sug-
gests interesting areas for future research concerning their influence on performance measurement
designs. For example, commercial funding models have been described as strongly influenced by
the values of the marketplace and its emphasis on near-term efficiency in the value creation pro-
cess (Froelich, 1999; Peterson, 1986). As stewards of public funds, government funding models
are more likely to be shaped by public sector values of accountability and equity in the outputs
and outcomes of the organization (Chaves, Stephens, & Galaskiewicz, 2004; Froelich, 1999;
O’Regan & Oster, 2002; Peterson, 1986). Corporate philanthropy can serve as a key strategy for
helping a corporation garner greater loyalty and goodwill within its local community and broader
marketplace (DiMaggio, 1986; Froelich, 1999; Kelly, 1998; Powell & Friedkin, 1986). Accord-
ingly, visibility of the outcomes of its philanthropic efforts is an important marketing asset. Simi-
larly, private donors are another market in which nonprofits compete for attention and popularity,
making the ability to effectively communicate impact a critical capability. These suggest the fol-
lowing propositions:
However, nonprofits as a population are more likely to pursue social missions that are character-
istically nonprogrammable (Moore, 2003). Nonprogrammable tasks are those in which the inputs,
appropriate actions, and outputs can vary dramatically. An example of a nonprogrammable task
would be crisis counseling in which what constitutes appropriate action varies hour by hour based
on the client who walks in the door, what they need, and what resources they may have to draw upon.
In nonprogrammable tasks that are easily observable, there is greater opportunity for process
improvement through direct observation approaches to organizational learning. However, in cases
where direct observation is either problematic (such as in the crisis counseling situation) or simply
too costly and the task is nonprogrammable, performance measurement of the value creation process
becomes more difficult. In these cases, nonprofits may focus more on monitoring internal capacity,
client satisfaction, and institutional legitimacy within their field as indicators of high performance.
Environmental Turbulence
Task programmability can be closely related to the degree of environmental stability. In highly sta-
ble operating environments, it becomes much easier to develop clear and unambiguous technologies
for accomplishing one’s goals. However, in turbulent and dynamic environments, nonprofits face
the additional challenge of sustaining themselves amid ever-changing fluctuations in funding, the
political and policy environment, human resource availability, relevant technologies, and public
priorities. Although the relative turbulence versus stability of nonprofit environments is an area that
has received significant scholarly interest (e.g., Andrews, 2008; Budding, 2004; Hoque, 2004), lim-
ited work has been done on understanding the implication of environmental turbulence on perfor-
mance measurement systems. However, there is ample theoretical basis to suggest that an association
between environmental turbulence and managerial choice in the adoption of performance measure-
ment systems may exist. Nonprofits operating in volatile environments must have greater capacity
to (1) buffer their organization from external shocks (Pfeffer & Salancik, 2003; Rethemeyer &
Hatmaker, 2008), (2) adapt their operations to new opportunities (Bryson, Crosby, & Stone,
2006; Guo & Acar, 2005; Larson, 1992), and (3) maintain relevancy within their domain despite
shifting priorities (DiMaggio & Powell, 1983). In these cases, internal capacity, interorganiza-
tional networks, and institutional legitimacy may be particularly critical to organizational survival
as each serves as a form of capital that can aid the organization during times of upheaval. For
example, interorganizational networks are commonly viewed as organizational resources for gain-
ing information that can give early notice of upcoming changes (Rethemeyer & Hatmaker, 2008;
Pfeffer & Salancik, 2003). Further, there is evidence that organizations can draw upon their networks
to identify collaborators, enabling them to take advantage of new opportunities that they
would not be able to pursue if working alone (Bryson et al., 2006; Guo & Acar, 2005;
Larson, 1992). Institutional legitimacy is another critical form of capital that can ensure the non-
profit is able to maintain competitiveness for funding even during times of economic downturn or
shifting political priorities when resources may become scarcer (DiMaggio & Powell, 1983).
Given that we can expect these capitals to be most critical to organizations operating in tur-
bulent environments, we propose that nonprofits in more turbulent environments will be more likely to
prioritize internal capacity, interorganizational networks, and institutional legitimacy in their
performance measurement approaches.
Conclusion
This study synthesizes the varied perspectives advocated by scholars to present an integrated frame-
work of nonprofit performance. Such a review highlights that there is more than one legitimate way
to conceptualize performance. By mapping these perspectives onto the value creation process of
nonprofits, this integrated framework offers scholars and practitioners a more holistic frame of reference
within which to position and critically assess the various normative arguments in the literature.
This review and integrated framework also has important implications for the practice of nonpro-
fit performance measurement: (1) it provides a starting point for new nonprofits to design a perfor-
mance measurement system; (2) for nonprofits that already have a performance measurement system,
the framework allows them to critically consider their own performance evaluation approaches
within this broader array of possible approaches; (3) to the extent that nonprofits use such common
measures, the framework can facilitate benchmarking and cross-program comparisons (Lampkin
et al., 2006); and (4) for stakeholders, this integrated framework provides a guideline to assess their
organizations’ performance or performance measurement system.
Further, given the diversity of normative perspectives present in the current literature, greater
integration is necessary for systematically advancing theory and research of nonprofit performance
measurement adoption and use. For example, building on neoinstitutional perspectives of organiza-
tions, we suggest that, despite the common assumption that performance measurement is motivated
by internal needs to improve operational functioning and efficiency, institutional pressures and the
need for external legitimacy are likely to have a stronger influence on what nonprofits actually mea-
sure. Further, we propose that variation in nonprofits' funding and operating environments will
increase the likelihood of certain perspectives being prioritized over others. In doing so, we demon-
strate how an integrated framework can play a critical role in identifying directions for future
research such as how different approaches to performance evaluation are understood, adopted, and
used within the sector. By better understanding how performance measurement is conceptualized
within the sector, the field will be better positioned to both critique and expand upon normative
approaches advanced in the literature as well as advance theory for predicting performance measure-
ment decisions.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Aldrich, H. (1976). Resource dependence and interorganizational relations local employment service offices
and social services sector organizations. Administration & Society, 7, 419–454.
Ammons, D. N., & Rivenbark, W. C. (2008). Factors influencing the use of performance data to improve
municipal services: Evidence from the North Carolina benchmarking project. Public Administration
Review, 68, 304–318.
Andrews, R. (2008). Perceived environmental uncertainty in public organizations: An empirical exploration.
Public Performance & Management Review, 32, 25–50.
Anheier, H. K. (2009). What kind of nonprofit sector, what kind of society? Comparative policy reflections.
American Behavioral Scientist, 52, 1082–1094.
Bagnoli, L., & Megali, C. (2011). Measuring performance in social enterprises. Nonprofit and Voluntary Sector
Quarterly, 40, 149–165.
Barman, E. (2007). What is the bottom line for nonprofit organizations? A history of measurement in the British
voluntary sector. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 18, 101–115.
Baruch, Y., & Ramalho, N. (2006). Communalities and distinctions in the measurement of organizational per-
formance and effectiveness. Nonprofit and Voluntary Sector Quarterly, 35, 39–65.
Beamon, B. M. (1999). Measuring supply chain performance. International Journal of Operations & Produc-
tion Management, 19, 275–292.
Behn, R. D. (1995). The big questions of public management. Public Administration Review, 55, 313–324.
Benjamin, L. M. (2012). Nonprofit organizations and outcome measurement: From tracking program activities
to focusing on frontline work. American Journal of Evaluation, 33, 431–447.
Berman, E. M. (2006). Performance and productivity in public and nonprofit organizations (2nd ed.). Armonk,
NY: M.E. Sharpe.
Bielefeld, W. (1992). Non-profit-funding environment relations: Theory and application. Voluntas: Interna-
tional Journal of Voluntary and Nonprofit Organizations, 3, 48–70.
Bozeman, B. (1984). Dimensions of ‘‘publicness’’: An approach to public organization theory. In B. Bozeman &
J. Straussman (Eds.), New directions in public administration. Monterey, CA: Brooks/Cole.
Bryson, J. M. (2011). Strategic planning for public and nonprofit organizations: A guide to strengthening and
sustaining organizational achievement (Vol. 1). San Francisco, CA: John Wiley.
Bryson, J. M., Crosby, B. C., & Stone, M. M. (2006). The design and implementation of cross-sector collabora-
tions: Propositions from the literature. Public Administration Review, 66, 44–55.
Budding, G. T. (2004). Accountability, environmental uncertainty and government performance: Evidence
from Dutch municipalities. Management Accounting Research, 15, 285–304.
Burns, T., & Stalker, G. M. (1961). The management of innovation. University of Illinois at Urbana-Cham-
paign’s Academy for Entrepreneurial Leadership Historical Research Reference in Entrepreneurship.
Cairns, B., Harris, M., Hutchison, R., & Tricker, M. (2005). Improving performance? The adoption and imple-
mentation of quality systems in UK nonprofits. Nonprofit Management & Leadership, 16, 135–151.
Campbell, D. A., Lambright, K. T., & Bronstein, L. R. (2012). In the eyes of the beholders: Feedback motiva-
tions and practices among nonprofit providers and their funders. Public Performance & Management
Review, 36, 7–30.
Carman, J. G. (2007). Evaluation practice among community-based organizations research into the reality.
American Journal of Evaluation, 28, 60–75.
Carman, J. G., & Fredericks, K. A. (2008). Nonprofits and evaluation: Empirical evidence from the field. In J.
G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation: New Directions for Evaluation, 119,
51–71.
Carroll, D. A., & Stater, K. J. (2009). Revenue Diversification in nonprofit organizations: Does it lead to finan-
cial stability? Journal of Public Administration Research and Theory, 19, 947–966.
Chang, C. F., & Tuckman, H. P. (1994). Revenue diversification among non-profits. VOLUNTAS: International
Journal of Voluntary and Nonprofit Organizations, 5, 273–290.
Chaves, M., Stephens, L., & Galaskiewicz, J. (2004). Does government funding suppress nonprofits’ political
activity? American Sociological Review, 69, 292–316.
Crittenden, W. F. (2000). Spinning straw into gold: The tenuous strategy, funding, and financial performance
linkage. Nonprofit and Voluntary Sector Quarterly, 29, 164–182.
Crook, W. P., Mullis, R. L., Cornille, T. A., & Mullis, A. K. (2005). Outcome measurement in homeless systems
of care. Evaluation and Program Planning, 28, 379–390.
Cutt, J., & Murray, V. (2000). Accountability and effectiveness evaluation in nonprofit. London, England:
Routledge.
Dart, R. (2004). Being ‘‘business-like’’ in a nonprofit organization: A grounded and inductive typology. Non-
profit and Voluntary Sector Quarterly, 33, 290–310.
DiMaggio, P. J. (1986). Can culture survive the marketplace? In P. DiMaggio (Ed.), Nonprofit enterprise and
the arts (pp. 65–93). New York, NY: Oxford University Press.
DiMaggio, P., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective
rationality in organizational fields. American Sociological Review, 48, 147–160.
Drucker, P. F. (2010). Managing the non-profit organization: Principles and practices. New York, NY:
HarperCollins.
Ebrahim, A., & Rangan, K. (2010). The limits of nonprofit impact: A contingency framework for measuring
social performance (Working paper, no. 10-099). Boston, MA: Harvard Business School.
Eckerd, A., & Moulton, S. (2011). Heterogeneous roles and heterogeneous practices: Understanding the adop-
tion and uses of nonprofit performance evaluations. American Journal of Evaluation, 32, 98–117.
Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14,
57–74.
Forbes, D. P. (1998). Measuring the unmeasurable: Empirical studies of nonprofit organization effectiveness
from 1977 to 1997. Nonprofit and Voluntary Sector Quarterly, 27, 183–202.
Froelich, K. A. (1999). Diversification of revenue strategies: Evolving resource dependence in nonprofit orga-
nizations. Nonprofit and Voluntary Sector Quarterly, 28, 246–268.
Frumkin, P. (2002). On being nonprofit: A conceptual and policy primer. Cambridge, MA: Harvard University
Press.
Greenway, M. T. (2001). The emerging status of outcome measurement in the nonprofit human service sector.
In P. Flynn & V. A. Hodgkinson (Eds.), Measuring the impact of the nonprofit sector. New York, NY:
Kluwer Academic/Plenum Publishers.
Gresov, C. (1989). Exploring fit and misfit with multiple contingencies. Administrative Science Quarterly, 34,
431–453.
Gronbjerg, K. A. (1993). Understanding nonprofit funding. San Francisco, CA: Jossey-Bass.
Gronbjerg, K. A. (2001). Foreword. In J. S. Ott (Ed.), The nature of the nonprofit sector. Boulder, CO:
Westview.
Guo, C., & Acar, M. (2005). Understanding collaboration among nonprofit organizations: Combining resource
dependency, institutional, and network perspectives. Nonprofit and Voluntary Sector Quarterly, 34,
340–361.
Hatry, H. P., Houten, T. V., Plantz, M. C., & Taylor, M. (1996). Measuring program outcomes: A practical
approach. Alexandria, VA: United Way of America.
Hatry, H. P., Fisk, D. M., Hall, J. R., Jr., Schaenman, P. S., & Snyder, L. (2006). How effective are your com-
munity services? Procedures for measuring their quality (3rd ed.). Washington, DC: The Urban Institute and
International City/County Manager Association.
Herman, R. D., & Renz, D. O. (2008). Advancing nonprofit organizational effectiveness research and theory
nine theses. Nonprofit Management & Leadership, 18, 399–415.
Hill, C., & Lynn, L. (2003). Producing human services why do agencies collaborate? Public Management
Review, 5, 63–81.
Hills, D., & Sullivan, F. (2006). Measuring public value 2: Practical approach. London, England: The Work
Foundation.
Hodge, M. M., & Piccolo, R. F. (2005). Funding source, board involvement techniques, and financial vulnerability in
nonprofit organizations: A test of resource dependence. Nonprofit Management and Leadership, 16, 171–190.
Hoque, Z. (2004). A contingency model of the association between strategy, environmental uncertainty and perfor-
mance measurement: Impact on organizational performance. International Business Review, 13, 485–502.
James, B. (2001). A performance monitoring framework for conservation advocacy. Department of Conserva-
tion Technical Series 25. Wellington, New Zealand: Department of Conservation.
Julnes, P. D. L., & Holzer, M. (2001). Promoting the utilization of performance measures in public organiza-
tions: An empirical study of factors affecting adoption and implementation. Public Administration Review,
61, 693–708.
Kanter, R. M., & Summers, D. V. (1994). Doing well while doing good: Dilemmas of performance measure-
ment in nonprofit organizations and the need for a multiple-constituency approach (pp. 220–236). London,
United Kingdom: Sage.
Kaplan, R. S. (2001). Strategic performance measurement and management in nonprofit organizations. Nonpro-
fit Management & Leadership, 11, 353–370.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action. Boston, MA:
Harvard Business School Press.
Kara, A., Spillan, J. E., & DeShields, Jr., O. W. (2004). An empirical investigation of the link between market
orientation and business performance in nonprofit service providers. Journal of Marketing Theory and Prac-
tice, 12, 59–72.
Kelly, K. S. (1998). Effective fund-raising management. Mahwah, NJ: Lawrence Erlbaum.
Kendall, J., & Knapp, M. (2000). Measuring the performance of voluntary organizations. Public Management:
An International Journal of Research and Theory, 2, 105–132.
Lambright, K. T. (2009). Agency theory and beyond: Contracted providers’ motivations to properly use service
monitoring tools. Journal of Public Administration Research and Theory, 19, 207–227.
Lampkin, L. M., Winkler, M. K., Kerlin, J., Hatry, H. P., Natenshon, D., Saul, J., Melkers, J., & Sheshadri, A.
(2006). Building a common outcome framework to measure nonprofit performance. Washington, DC: The
Urban Institute. Retrieved from http://www.urban.org/publications/411404.html
Land, K. C. (2001). Social indicators for assessing the impact of the independent, not-for-profit sector of soci-
ety. In P. Flynn & V. A. Hodgkinson (Eds.), Measuring the impact of the nonprofit sector. New York, NY:
Kluwer Academic/Plenum Publishers.
Larson, A. (1992). Network dyads in entrepreneurial settings: A study of the governance of exchange relation-
ships. Administrative Science Quarterly, 37, 76–104.
LeRoux, K., & Wright, N. S. (2010). Does performance measurement improve strategic decision making? Find-
ings from a national survey of nonprofit social service agencies. Nonprofit and Voluntary Sector Quarterly,
39, 571–587.
MacIndoe, H., & Barman, E. (2012). How organizational stakeholders shape performance measurement in non-
profits: Exploring a multidimensional measure. Nonprofit and Voluntary Sector Quarterly, Advance online
publication. doi: 10.1177/0899764012444351.
Martikke, S. (2008). Commissioning: Possible–Greater Manchester VCS organizations’ experiences in public
sector commissioning. Manchester: Greater Manchester Centre for Voluntary Organization.
McLean, C., & Brouwer, C. (2010). The effect of the economy on the nonprofit sector: A June 2010 survey.
Retrieved March 13, 2013, from http://www.guidestar.org/ViewCmsFile.aspx?ContentID=2963
Median-Borja, A., & Triantis, K. (2007). A conceptual framework to evaluate performance of nonprofit social
service organizations. International Journal of Technology Management, 37, 147–161.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony.
American Journal of Sociology, 83, 340–363.
Modell, S. (2001). Performance measurement and institutional processes: a study of managerial responses to
public sector reform. Management Accounting Research, 12, 437–464.
Modell, S. (2009). Institutional research on performance measurement and management in the public sector
accounting literature: a review and assessment. Financial Accountability & Management, 25, 277–303.
Moore, M. H. (2000). Managing for value: Organizational strategy in for-profit, nonprofit, and governmental
organizations. Nonprofit and Voluntary Sector Quarterly, 29, 183–208.
Moore, M. H. (2003). The ‘public value scorecard’: A rejoinder and an alternative to ‘strategic performance
measurement and management in non-profit organizations’ by Robert Kaplan (Hauser Center for Nonprofit
Organizations Working Paper, no. 18). Boston, MA: Hauser Center for Nonprofit Organizations, Harvard
University.
Moulton, S., & Eckerd, A. (2012). Preserving the publicness of the nonprofit sector resources, roles, and public
values. Nonprofit and Voluntary Sector Quarterly, 41, 656–685.
Moxham, C. (2009a). Performance measurement: Examining the applicability of the existing body of knowl-
edge to nonprofit organizations. International Journal of Operations & Production Management, 29,
740–763.
Moxham, C. (2009b). Quality or quantity? Examining the role of performance measurement in nonprofit orga-
nizations in the UK. Paper presented at the 16th International European Operations Management Association
Conference, Goteborg, Sweden.
Moynihan, D. P. (2005). Why and how do state governments adopt and implement ‘‘Managing for Results’’
reforms? Journal of Public Administration Research and Theory, 15, 219–243.
Moynihan, D. P., & Pandey, S. K. (2010). The big question for performance management: why do managers use
performance information? Journal of Public Administration Research and Theory, 20, 849–866.
Moynihan, D. P., Pandey, S. K., & Wright, B. E. (2012). Setting the table: How transformational leadership
fosters performance information use. Journal of Public Administration Research and Theory, 22, 143–164.
Newcomer, K. (1997). Using performance measurement to improve programs. In K. Newcomer (Ed.), Using
performance measurement to improve public and nonprofit programs. San Francisco, CA: Jossey-Bass.
O’Regan, K., & Oster, S. (2002). Does government funding alter nonprofit governance? Evidence from New
York City nonprofit contractors. Journal of Policy Analysis and Management, 21, 359–379.
Oster, S. M. (1995). Strategic management for nonprofit organizations: Theory and cases. New York, NY:
Oxford University Press.
Penna, R. M. (2011). The nonprofit outcomes toolbox: A complete guide to program effectiveness, performance
measurement, and results. Hoboken, NJ: John Wiley.
Peterson, P. A. (1986). From impresario to arts administrator. In P. DiMaggio (Ed.), Nonprofit enterprise and
the arts (pp. 161–183). New York, NY: Oxford University Press.
Pettigrew, A. M., Woodman, R. W., & Cameron, K. S. (2001). Studying organizational change and development:
Challenges for future research. Academy of Management Journal, 44, 697–713.
Pfeffer, J., & Salancik, G.R. (2003). An external perspective on organizations. In The External Control of Orga-
nizations, (pp. 1–22). Stanford, CA: Stanford University Press.
Poister, T. H. (2003). Measuring performance in public and nonprofit organizations. San Francisco, CA: Jossey-
Bass.
Poole, D. L., Duvall, D., & Wofford, B. (2006). Concept mapping key elements and performance measures in a
state nursing home-to-community transition project. Evaluation and Program Planning, 29, 10–22.
Poole, D., Nelson, J., Carnahan, S., Chepenik, N. G., & Tubiak, C. (2000). Evaluating performance measure-
ment systems in nonprofit agencies: The Program Accountability Quality Scale (PAQS). American Journal
of Evaluation, 21, 15–26.
Powell, W. W., & Friedkin, R. (1986). Politics and programs: Organizational factors in public television deci-
sion making. In P. DiMaggio (Ed.), Nonprofit enterprise and the arts (pp. 245–269). New York, NY: Oxford Uni-
versity Press.
Poister, T. H. (2010). The future of strategic planning in the public sector: Linking strategic management and
performance. Public Administration Review, 70, 246–254.
Radin, B. (2006). Challenging the performance movement: Accountability, complexity, and democratic values.
Washington, DC: Georgetown University Press.
Rethemeyer, R. K., & Hatmaker, D. M. (2008). Network management reconsidered: An inquiry into manage-
ment of network structures in public sector service provision. Journal of Public Administration Research and
Theory, 18, 617–646.
Ritchie, W. J., & Kolodinsky, R. W. (2003). Nonprofit organization financial performance measurement: An evalua-
tion of new and existing financial performance measures. Nonprofit Management & Leadership, 13, 367–381.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). London,
England: Sage.
Roy, C., & Seguin, F. (2000). The institutionalization of efficiency-oriented approaches for public service
improvement. Public Productivity & Management Review, 23, 449–468.
Russ-Eft, D., & Preskill, H. (2009). Evaluation in organizations: A systematic approach to enhancing learning,
performance, and change (2nd ed.). Cambridge, MA: Perseus Press.
Salamon, L. M. (2002). The resilient sector: The state of nonprofit America. In L. Salamon (Ed.), The state of
nonprofit America (pp. 3–63). Washington, DC: Brookings Institution Press.
Sawhill, J. C., & Williamson, D. (2001). Mission impossible? Measuring success in nonprofit organizations.
Nonprofit Management & Leadership, 11, 371–386.
Scott, W. R. (1995). Institutions and organizations (2nd ed.). Thousand Oaks, CA: Sage.
Sharfman, M. P., Gray, B., & Yan, A. (1991). The context of interorganizational collaboration in the garment
industry: An institutional perspective. Journal of Applied Behavioral Science, 27, 181–208.
Simon, H. (1982). Models of bounded rationality: Behavioral economics and business organization. Cam-
bridge, MA: MIT Press.
Skinner, D. (2004). Evaluation and change management: rhetoric and reality. Human Resource Management
Journal, 14, 5–19.
Sobelson, R. K., & Young, A. C. (2013). Evaluation of a federally funded workforce development program:
The Centers for Public Health Preparedness. Evaluation and Program Planning, 37, 50–57.
Sowa, J. E. (2009). The collaboration decision in nonprofit organizations views from the front line. Nonprofit
and Voluntary Sector Quarterly, 38, 1003–1025.
Sowa, J. E., Selden, S. C., & Sandfort, J. R. (2004). No longer unmeasurable? A multidimensional integrated
model of nonprofit organizational effectiveness. Nonprofit and Voluntary Sector Quarterly, 33, 711–728.
Talbot, C. (2008). Measuring public value: A competing value approach. London, England: The Work
Foundation.
Thomson, D. E. (2010). Exploring the role of funders’ performance reporting mandates in nonprofit perfor-
mance measurement. Nonprofit and Voluntary Sector Quarterly, 39, 611–629.
Valley of the Sun United Way. (2008). Logic model handbook. Retrieved from http://www.vsuw.org/file/logic_
model_handbook_updated_2008.pdf
Van Slyke, D. M. (2007). Agents or stewards: Using theory to understand the government-nonprofit social ser-
vice contracting relationship. Journal of Public Administration Research and Theory, 17, 157–187.
W. K. Kellogg Foundation. (2004). Logic model development guide. Retrieved from http://www.wkkf.org/
knowledge-center/resources/2006/02/wk-kellogg-foundation-logic-model-development-guide.aspx
Williams, C. B., Fedorowicz, J., & Tomasino, A. P. (2010, May). Governmental factors associated with state-
wide interagency collaboration initiatives. In Proceedings of the 11th Annual International Digital Govern-
ment Research Conference on Public Administration Online: Challenges and Opportunities (pp. 14–22).
Digital Government Society of North America.
Williamson, O. (1981). The economics of organization: The transaction cost approach. The American Journal
of Sociology, 87, 548–577.
Williamson, O. (1999). Public and private bureaucracies: A transaction cost economics approach. The Journal
of Law, Economics, and Organization, 15, 306–342.
Yang, K. (2009). Examining perceived honest performance reporting by public organizations: Bureaucratic pol-
itics and organizational practice. Journal of Public Administration Research and Theory, 19, 81–105.