Why Measure Performance? Different Purposes Require Different Measures
Robert D. Behn
Harvard University
Performance measurement is not an end in itself. So why should public managers measure performance? Because they may find such measures helpful in achieving eight specific managerial purposes. As part of their overall management strategy, public managers can use performance measures to evaluate, control, budget, motivate, promote, celebrate, learn, and improve. Unfortunately, no single performance measure is appropriate for all eight purposes. Consequently, public managers should not seek the one magic performance measure. Instead, they need to think seriously about the managerial purposes to which performance measurement might contribute and how they might deploy these measures. Only then can they select measures with the characteristics necessary to help achieve each purpose. Without at least a tentative theory about how performance measures can be employed to foster improvement (which is the core purpose behind the other seven), public managers will be unable to decide what should be measured.
Notes
1. Okay, not everyone is measuring performance. From a survey of municipalities in the United States, Poister and Streib find that "some 38 percent of the [695] respondents indicate that their cities use performance measures, a significantly lower percentage than reported by some of the earlier surveys" (1999, 328). Similarly, Ammons reports on municipal governments' "meager record" of using performance measures (1995, 42). And, of course, people who report they are measuring performance may not really be using these measures for any real purpose. Joyce notes there is "little evidence that performance information is actually used in the process of making budget decisions" (1997, 59).

2. People can measure the performance of (1) a public agency; (2) a public program; (3) a nonprofit or for-profit contractor that is providing a public service; or (4) a collaborative of public, nonprofit, and for-profit organizations. For brevity, I usually mention only the agency—though I clearly intend my reference to a public agency's performance to include the performance of its programs, contractors, and collaboratives.

3. Although Hatry provides the usual list of different types of performance information—input, process, output, outcome, efficiency, workload, and impact data (1999b, 12)—when discussing his 10 different purposes (chapter 11), he refers almost exclusively to outcome measures.

4. These eight purposes are not completely distinct. For example, learning itself is valuable only when put to some use. Obviously, two ways to use the learning extracted from performance measurements are to improve and to budget. Similarly, evaluation is not an ultimate purpose; to be valuable, any evaluation has to be used either to redesign programs (to improve) or to reallocate resources (to budget) by moving funds into more valuable uses. Even the budgetary purpose is subordinate to improvement.
   Indeed, the other seven purposes are all subordinate to improvement. Whenever public managers use performance measures to evaluate, control, budget, motivate, promote, celebrate, or learn, they do so only because these activities—they believe or hope—will help to improve the performance of government. There is, however, no guarantee that every use of performance measures to budget or celebrate will automatically enhance performance. There is no guarantee that every controlling or motivational strategy will improve performance. Public managers who seek evaluation or learning measures as a step toward improving performance need to think carefully not only about why they are measuring, but also about what they will do with these measurements and how they will employ them to improve performance.

5. Jolie Bain Pillsbury deserves the credit for explicitly pointing out to me that distinctly different purposes for measuring performance exist. On April 16, 1996, at a seminar at Duke University, she defined five purposes: evaluate, motivate, learn, promote, and celebrate (Behn 1997b).
   Others, however, have also observed this. For example, Osborne and Gaebler (1992), in their chapter on "Results-Oriented Government," employ five headings that capture five of my eight purposes: "If you don't measure results, you can't tell success from failure" (147) (evaluate); "If you can't see success, you can't reward it" (148) (motivate); "If you can't see success, you can't learn from it" (150) (learn); "If you can't recognize failure, you can't correct it" (152) (improve); "If you can demonstrate results, you can win public support" (154) (promote).

6. Anyone who wishes to add a purpose to this list should also define the characteristics of potential measures that will be most appropriate for this additional purpose.

7. But isn't promoting accountability a quite distinct and also very important purpose for measuring performance? After all, scholars and practitioners emphasize the connection between performance measurement and accountability. Indeed, it is Hatry's first use of performance information (1999b, 158). In a report commissioned by the Governmental Accounting Standards Board on what it calls "service efforts and accomplishments [SEA] reporting," Hatry, Fountain, and Sullivan (1990) note that SEA measurement reflects the board's desire "to assist in fulfilling government's duty to be publicly accountable and … enable users to assess that accountability" (2). Moreover, they argue, without such performance measures, elected officials, citizens, and other users "are not
References
Ammons, David N. 1995. Overcoming the Inadequacies of Performance Measurement in Local Government: The Case of Libraries and Leisure Services. Public Administration Review 55(1): 37–47.
Ammons, David N., Charles Coe, and Michael Lombardo. 2001. Performance-Comparison Projects in Local Government: Participants' Perspectives. Public Administration Review 61(1): 100–10.
Anthony, Robert N. 1988. The Management Control Function. Boston, MA: Harvard Business School Press.
Anthony, Robert N., and Vijay Govindarajan. 1998. Management Control Systems. 9th ed. Burr Ridge, IL: McGraw-Hill/Irwin.
Anthony, Robert N., and David W. Young. 1999. Management Control in Nonprofit Organizations. 6th ed. Burr Ridge, IL: McGraw-Hill/Irwin.
Bardach, Eugene. 1998. Getting Agencies to Work Together: The Practice and Theory of Managerial Craftsmanship. Washington, DC: Brookings Institution.
Baron, Jonathan, and John C. Hershey. 1988. Outcome Bias in Decision Evaluation. Journal of Personality and Social Psychology 54(4): 569–79.
Behn, Robert D. 1991. Leadership Counts: Lessons for Public Managers. Cambridge, MA: Harvard University Press.
———. 1992. Management and the Neutrino: The Search for Meaningful Metaphors. Public Administration Review 52(5): 409–19.
———. 1996. The Futile Search for the One Best Way. Governing, July, 82.
———. 1997a. The Money-Back Guarantee. Governing, September, 74.
———. 1997b. Linking Measurement to Motivation: A Challenge for Education. In Improving Educational Performance: Local and Systemic Reforms, Advances in Educational Administration 5, edited by Paul W. Thurston and James G. Ward, 15–58. Greenwich, CT: JAI Press.
———. 1999. Do Goals Help Create Innovative Organizations? In Public Management Reform and Innovation: Research, Theory, and Application, edited by H. George Frederickson and Jocelyn M. Johnston, 70–88. Tuscaloosa, AL: University of Alabama Press.
———. 2001. Rethinking Democratic Accountability. Washington, DC: Brookings Institution.
Blodgett, Terrell, and Gerald Newfarmer. 1996. Performance Measurement: (Arguably) The Hottest Topic in Government Today. Public Management, January, 6.
Boone, Harry. 1996. Proving Government Works. State Government News, May, 10–12.
Bruns, William J. 1993. Responsibility Centers and Performance Measurement. Note 9-193-101. Boston, MA: Harvard Business School.
Coe, Charles. 1999. Local Government Benchmarking: Lessons from Two Major Multigovernment Efforts. Public Administration Review 59(2): 110–23.
Duncan, W. Jack. 1989. Great Ideas in Management: Lessons from the Founders and Foundations of Managerial Practice. San Francisco, CA: Jossey-Bass.
Feynman, Richard. 1965. The Character of Physical Law. Cambridge, MA: MIT Press.
Ghosh, Dipankar, and Manash R. Ray. 2000. Evaluating Managerial Performance: Mitigating the "Outcome Effect." Journal of Managerial Issues 12(2): 247–60.
Hatry, Harry P. 1999a. Mini-Symposium on Intergovernmental Comparative Performance Data. Public Administration Review 59(2): 101–4.
———. 1999b. Performance Measurement: Getting Results. Washington, DC: Urban Institute.
Hatry, Harry P., James R. Fountain, Jr., and Jonathan M. Sullivan. 1990. Overview. In Service Efforts and Accomplishments Reporting: Its Time Has Come, edited by Harry P. Hatry, James R. Fountain, Jr., Jonathan M. Sullivan, and Lorraine Kremer, 1–49. Norwalk, CT: Governmental Accounting Standards Board.
Hatry, Harry P., James R. Fountain, Jr., Jonathan M. Sullivan, and Lorraine Kremer. 1990. Service Efforts and Accomplishments Reporting: Its Time Has Come. Norwalk, CT: Governmental Accounting Standards Board.
Hawkins, Scott A., and Reid Hastie. 1990. Hindsight: Biased Judgments of Past Events after the Outcomes Are Known. Psychological Bulletin 107(3): 311–27.
Hershey, John C., and Jonathan Baron. 1992. Judgment by Outcomes: When Is It Justified? Organizational Behavior and Human Decision Processes 53(1): 89–93.
Holloway, Jacky A., Graham A.J. Francis, and C. Matthew Hinton. 1999. A Vehicle for Change? A Case Study of Performance Improvement in the "New" Public Sector. International Journal of Public Sector Management 12(4): 351–65.