insights 12

Measuring personal outcomes: challenges and strategies

Emma Miller, Glasgow School of Social Work
December 2011
Key points
• A focus on personal outcomes offers the
potential to refocus on what matters to
people who use services, with potential
benefits for the individuals involved, staff
and organisations.
• It is important to be clear about the purpose
of measuring outcomes. In particular,
whether the measurement is primarily for
improvement purposes or for judgement – in
practice it may well be both.
• There is potential to link outcomes
measurement to the organisational value
base and a range of approaches and tools
are emerging to support this.
• There are many identified challenges of
measuring outcomes, but the evidence
highlights various recommendations and
strategies that can help.
• Outcomes tools are sometimes designed
with a very specific user group in mind,
whilst others can be used more generally
with different user groups.
Measuring outcomes
For many years there has been an emphasis
on measuring the outcomes of human
services. It is important to distinguish between personal outcomes, which are defined by the individual, and service-defined outcomes, which are pre-determined by the organisation on behalf of its beneficiaries. The reasons for measuring
personal outcomes can be understood from
various perspectives. Research demonstrates
that it cannot be assumed that service users’
views on their outcomes will correspond with
those of organisations and practitioners (Felton
2005). Further, for people who use services
and their families, being involved in defining
the outcomes they want to achieve can be
empowering and result in increased relevance
of support (Qureshi 2001, Beresford et al 2011).
For staff, working with individuals to develop
outcome-focused plans, and reviewing the
outcomes achieved can help achieve clarity of
purpose (Thompson 2008). For organisations,
an outcomes approach can help to reconnect
with their value base and ensure that they
are focussed on the difference they make
to people’s lives, as well as the activities
undertaken (Miller 2011). Measuring outcomes
is not enough in itself but can provide the
‘missing piece of the information jigsaw’ in
relation to evaluating and improving services,
and increasing accountability to the public and
regulatory bodies. This Insight will consider
some of the challenges of measuring outcomes
and emerging responses to these.
Policy context: Scotland

Outcomes have been emphasised in Scottish
policy for several years. Better outcomes
for older people (Scottish Executive 2004)
strongly advocated an outcomes focus. In
2006 the Scottish Government stated that
less time should be spent on measuring what
goes into services and how money has been
spent, and that more time should be invested
on what funding achieves for individuals and
communities (Scottish Government 2006). This
was followed by the overarching Single Outcome
Agreement (SOA) (Scottish Government 2007),
which set out a new relationship between
central and local government, allowing for
more flexibility at the point of delivery. Sitting
underneath the overarching SOA is Getting it Right for Every Child (GIRFEC) (Scottish Government 2008a), the Community Care Outcomes Framework (Scottish Government 2008b) and the National Outcomes and Standards for Criminal Justice (Scottish Government 2010).
The Housing Support Enablement Unit also
recently produced a specific tool for relevant
providers (HSEU 2011).
Defining outcomes

Key evaluation concepts can be defined as detailed in Table 1.
The Social Policy Research Unit identified
three main categories of outcome, which their
research found to be important to people using
social care services:
• Quality of Life outcomes (or maintenance outcomes) are the aspects of a person's whole life that they are working to achieve or maintain.
• Process outcomes relate to the experience that individuals have in seeking, obtaining and using services and supports.
• Change outcomes relate to the improvements in physical, mental or emotional functioning that individuals are seeking from any particular service intervention or support (Qureshi et al 2001).
Table 1: Summary of main definitions

Inputs: All the resources a group needs to carry out its activities
Activities: The actions, tasks and work a project or organisation carries out to create its outputs and outcomes, and achieve its aims
Outputs: Products, services or facilities that result from an organisation's or project's activities
Outcomes: The changes, benefits, learning or other effects that result from what the project or organisation makes, offers or provides
Impact: Broader or longer-term effects of a project's or organisation's outputs, outcomes and activities

Table adapted from Wainwright (2002) and Charities Evaluation Services (2004)
Specific services may emphasise particular
types of outcome but research has shown
that there are benefits to considering the
different categories of outcome. For example,
Beresford and Branfield (2006) caution
against a tendency in service-led discussions
about evaluation to separate process from
outcome because their research with service
users demonstrated that the process, or how
services engage with people, is inseparable from, and shapes, the outcome.
Challenges with measuring outcomes

Despite the long-standing policy focus, measuring outcomes remains challenging. Some of the key challenges are outlined below.

Table 2: Characteristics of indicators used for judgement (reporting for external scrutiny and comparison) and improvement (using information to make improvements within the organisation)

Indicators for judgement          | Indicators for improvement
Unambiguous interpretation        | Variable interpretation possible
Unambiguous attribution           | Ambiguity tolerable
Definitive marker of quality      | Screening tool
Good data quality                 | Poor data quality tolerable
Good risk-adjustment              | Partial risk-adjustment tolerable
Statistical reliability necessary | Statistical reliability preferred
Cross-sectional                   | Time trends
Used for punishment/reward        | Used for learning/changing practice
For external use                  | Mainly for internal use
Stand-alone                       | Allowance for context possible
Risk of unintended consequences   | Lower risk of unintended consequences

Table adapted from Raleigh and Foot (2010)
1. Clarity of purpose
It is important to be clear about the purpose
of measuring outcomes. In particular, there
is the question of whether the measurement
is primarily for improvement purposes or for
judgement:
In the former case, the information is used as a
‘tin opener’ for internal use, designed to prompt
further investigation and action where needed,
and not as a definitive measure of performance
in itself. In the latter case, the information is
used as a ‘dial’ – an unambiguous measure
of performance where there is no doubt
about attribution, and which may be linked
to explicit incentives for good performance
(pay for performance) and sanctions for poor
performance (Raleigh and Foot, 2010 p6).
In practice, most systems will need to consider
measurement both for improvement and
judgement. The emphasis given to each
can result in very different approaches to
the selection of measures, collection of data
and interpretation and use, which in turn will
influence the culture of the organisation.
2. Measurable or meaningful?
One of the policy priorities in service
improvement is that the results should be
measurable. Recent research highlighted the
limitations of quality measurement, including
the tendency to miss areas where evidence
or data are not available, and to exclude less
quantifiable aspects of quality (Raleigh and Foot
2010). This is of particular concern given that
what is deemed easy to measure can in turn
determine and limit the priorities and activities of
services. Further, the delivery of a quality service
does not necessarily guarantee good outcomes,
so measuring quality alone is not sufficient.
The evidence reveals the adverse effects of
prioritising external reporting, particularly in the
form of targets (Raleigh and Foot 2010), and the
risk of ‘severely dysfunctional consequences’
arising from performance systems which are
insufficiently vigilant to unintended effects
(Smith 2007, p304). Other research has shown
the importance of moving beyond a sole
focus on external accountability to the need
to link evaluation and measurement to the
organisational value base (Whitman 2008).
Further, it has been argued that measuring
the outcomes of a service should be part of
a wider shift of focus onto the person and
their outcomes, and that without the shift of
focus, the outcomes tool may become another
form which is mechanistically completed by
practitioners (MacKeith and Graham 2007). As
the Audit Commission notes, equally important
is the emphasis on involving staff:
Corporate leadership on data and information
quality is vital… However, one of the biggest
factors underlying poor data quality is the lack
of understanding among frontline staff of the
reasons for, and benefits of, the information
they are collecting. The information collected
is too often seen as irrelevant to patient care
and focused on the needs of the “centre”
rather than frontline service delivery (Audit
Commission 2004, p5).
A recent guide in Scotland focuses on the
critical role of staff in recording outcomes, and
includes some common errors and practical
examples (Miller and Cook 2011).
3. Hard and soft outcomes
Several authors highlight the limitations of
only focusing on ‘hard’ or easily measured
outcomes. In many such cases, what are
categorised as hard outcomes could be
described as outputs, such as numbers of
individuals completing a training course, or
numbers who achieve employment following a
training scheme. In contrast, soft outcomes give
a fuller picture of the overall value and success
of projects. Measuring soft outcomes is also
supported by inclusion of qualitative as well
as quantitative data. Although this presents its
own challenges in terms of data management,
resources are available to support this
(Evaluation Support Scotland 2009b).
Some funders, including the Big Lottery Fund,
require that soft outcomes are considered.
However, interim findings from a longitudinal
study of the third sector in Scotland found that
many agencies were unable to demonstrate
their value because of the tendency of some
funders to focus on hard outcomes. The most
vulnerable users were viewed as missing out
because they were less likely to achieve quick
and measurable outcomes:
The focus on attaining quick, clear results with
clients had, it was argued, led to those with
some of the greatest need being overlooked
in the pursuit of targets. For instance, the
outcomes-focused approach encouraged
competition between services for groups of
clients who can easily have measurable ‘positive’
outcomes (Scottish Government 2011).
Recent research by the Standards We Expect
project in England examined the development
of person-centred support from the perspective
of service users, carers, practitioners and
frontline managers. They identified efforts
to develop ‘softer’ targets and measures
consistent with independent living as one of
the key developments in overcoming barriers to
person-centred support (Beresford et al 2011).
The following example illustrates the value and
inclusivity of focusing on soft outcomes:
Example: What would become of the 90 year-old widower who gained the confidence to learn
computing skills to write his autobiography for
his family... He will neither be getting a job nor
going on to accredited courses and yet the soft
outcomes keep him active and involved rather
than confined to a retirement home (Butcher
and Marsden 2004, p4).
4. Challenges of attribution
One of the most frequently cited challenges
of measuring personal outcomes is that of
establishing cause and effect, or attribution.
The challenge of isolating the impact of any
one service is further complicated where
there is multi-agency involvement (Ellis and
Gregory 2008). Was it the individual, their
family, the service, other services or other
factors that influenced the outcomes? A recent
Learning Point paper from the Improvement
Service (McGuire 2010) acknowledged the
complexity of attribution due to the number
of partners involved and the range of external
factors. Some agencies highlight the benefit of
obtaining the perspectives of users, carers and
staff to help to identify causal chains (Culpitt
and Ellis 2003).
5. Variation in service users
The final challenge of measuring outcomes
to be covered here is that of variation in the
characteristics of service users, which leads to
challenges of interpretation of data. This is not
unrelated to challenges of attribution. To avoid
unfair comparisons across different services,
account should be taken of such variations,
as responses can be influenced by service
user characteristics unrelated to the quality of
care, such as age, gender, region of residence,
self-reported health status, type of care and
expectations (Raleigh and Foot 2010).
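To make the point concrete, the sketch below compares two hypothetical services on a self-reported outcome score within age bands rather than on raw averages, so that a service working mainly with older people is not unfairly judged. The services, scores and bands are invented for illustration; real risk-adjustment would use established statistical methods.

```python
# Illustrative sketch: comparing outcome scores across two hypothetical
# services within age bands, so that a service serving older people is
# not judged on raw averages alone. All data are invented.

from collections import defaultdict

# (service, age_band, outcome_score on a 1-5 self-report scale)
records = [
    ("Service A", "65-74", 4), ("Service A", "75+", 3),
    ("Service A", "75+", 2),   ("Service B", "65-74", 5),
    ("Service B", "65-74", 4), ("Service B", "75+", 3),
]

# Group scores by (service, age band)
grouped = defaultdict(list)
for service, band, score in records:
    grouped[(service, band)].append(score)

# Report mean scores per band, so like is compared with like
for (service, band), scores in sorted(grouped.items()):
    mean = sum(scores) / len(scores)
    print(f"{service}, ages {band}: mean score {mean:.1f} (n={len(scores)})")
```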
Recommendations/strategies
There are no easy answers to many of
the identified challenges of measuring
outcomes, but the evidence highlights various
recommendations and strategies that can help,
and being mindful of these challenges can be a
useful starting point.
Theory-driven evaluation

Theory-driven evaluation provides an alternative to traditional input-output approaches to evaluation, and it has been suggested that it is better suited to complex real-world interventions. It involves developing a programme theory, which sets out what the project planners expect from the intervention, making implicit assumptions explicit, and then checking that theory with staff and key stakeholders.
In brief, theory-driven evaluation first attempts
to map out the programme theory lying behind
the intervention and then designs a research
evaluation to test out that theory. The aim is not
to find out ‘whether it works,’ as the answer is
almost always ‘yes, sometimes’. The purpose is
to establish when, how and why the intervention
works, to unpick the complex relationships
between context, content, application and
outcomes, and to develop a necessarily
contingent and situational understanding of
effectiveness (Walshe 2007, p58).
Theory-driven evaluation means developing a hypothesis that can be tested in practice. Logic modelling, discussed below, is an example of a theory-driven approach.
Logic modelling
Logic modelling involves an organisation
(staff, users, carers etc) working to define the
endpoint that they want to reach, and then considering what activities and processes are required to achieve it. It can help organisations
adopt an outcomes approach by improving
their clarity about what they are aiming to
achieve. Guides are available to support the
development of a logic model (Evaluation
Support Scotland 2009a). The Charities
Evaluation Service (CES) has used logic
modelling to demonstrate how soft outcomes
can be viewed as outcomes in their own right
and can contribute to longer term or more
strategic outcomes (which could be applied to
the Single Outcome Agreement in Scotland).
Example: From inputs to long term change = The Women's Project (Culpitt and Ellis 2003)

The Women's Project aims to reduce unwanted teenage pregnancy by offering support and group work to young women.

Inputs      | Outputs            | Outcomes                                    | Long-term change
Staff       | One-to-one support | Increased confidence                        | Increased social inclusion
Budget      | Group work         | Understand alternatives to young parenthood | Reduced teenage pregnancy
Venue       | Outings            | Be ambitious                                |
Advertising |                    | Able to access training                     |
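Purely as an illustration, a logic model of this kind can also be written down as a simple data structure linking each stage to the next. The sketch below restates the Women's Project example; the field names and format are invented and do not represent any published tool.

```python
# Minimal sketch of a logic model as a data structure, restating the
# Women's Project example above. Stage names and the record format are
# illustrative only, not a prescribed layout.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    aim: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)
    long_term_change: list = field(default_factory=list)

womens_project = LogicModel(
    aim="Reduce unwanted teenage pregnancy through support and group work",
    inputs=["Staff", "Budget", "Venue", "Advertising"],
    outputs=["One-to-one support", "Group work", "Outings"],
    outcomes=["Increased confidence",
              "Understand alternatives to young parenthood",
              "Be ambitious", "Able to access training"],
    long_term_change=["Increased social inclusion",
                      "Reduced teenage pregnancy"],
)

# Walk the chain from inputs through to long-term change
for stage in ("inputs", "outputs", "outcomes", "long_term_change"):
    print(f"{stage}: {', '.join(getattr(womens_project, stage))}")
```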
A project might bring about changes before
reaching its final outcome. For example,
someone using a drugs project is likely to
change in various ways before they stop using
drugs. The project may not always reach all
its final outcomes in its lifetime, or individuals
might move on before doing so, so it is
important to record changes on the way.
Example: Outcomes on the way = Employment Training Service (Culpitt and Ellis 2003)

Project aim: To reduce social exclusion

Outcomes on the way:
• Improve motivation and aspirations
• Improve confidence and self-esteem
• Improve communication skills
• Improve job search skills
• Increase work skills

Long-term outcomes:
• Improved opportunity to re-enter education and to find work
• Improved chance of qualifications

Choosing or designing outcomes tools
There are many outcomes tools across service
sectors, with varying formats and content.
Although it is possible to find tools which
measure outcomes at one interval, it is more
common for outcomes to be measured at
least at two intervals, providing a picture of
the person’s journey towards their intended
outcomes. Outcomes tools are sometimes
designed with a very specific user group in
mind, whilst others can be used more generally
with different user groups. Earlier research on
measuring soft outcomes concluded that a
generic model for soft outcomes was neither
desirable nor achievable and that a flexible
approach was needed for interventions which
were holistic, integrated and geared to the
individual needs of users (Dewson et al 2000).
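As a hedged illustration of measurement at two intervals, the sketch below records where a person places themselves against each outcome at baseline and again at review, and reports the distance travelled. The outcomes, scale and scores are invented and do not reproduce any published tool.

```python
# Illustrative sketch: 'distance travelled' on a subjective 1-5 scale,
# scored at two intervals (baseline and review). Outcomes and scores
# are invented; no particular published tool is reproduced here.

# Where the person places themselves against each outcome
# (1 = far from the outcome, 5 = outcome achieved).
baseline = {"Feeling safe": 2, "Seeing people": 1, "Keeping active": 3}
review   = {"Feeling safe": 4, "Seeing people": 3, "Keeping active": 3}

for outcome, start in baseline.items():
    end = review[outcome]
    change = end - start
    direction = ("improved" if change > 0
                 else "unchanged" if change == 0 else "declined")
    print(f"{outcome}: {start} -> {end} ({direction}, distance {change:+d})")
```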
Some agencies and organisations have reported
benefits from designing their own outcomes
tools. A key advantage is that the process of
engaging staff in designing a tool can develop
an outcomes orientation within the organisation
and promote ownership by staff. However,
some authors urge caution against investing
too much effort in devising the perfect tool, as
the tool should be seen as an accompaniment
and enabler, rather than a replacement for the
worker’s professional judgement (Butcher and
Marsden 2004). Where an agency decides to
develop their own tool, some guides recommend
that they adapt an existing tool (MacKeith
and Graham 2007). The Coalition of Care and
Support Providers in Scotland has produced a
summary guide of existing tools (CCPS 2010),
including any costs where relevant.
Outcomes tools can be based on different
types of questions. Examples highlighted by MacKeith and Graham (2007) include concrete
questions, subjective scales which ask where
the person thinks they are in relation to a
specified outcome, and defined scales which
ask where the person is on a journey of change
towards an outcome, based on pre-determined
intervals. Other approaches such as Talking
Points (Cook and Miller 2010) adopt a more
flexible, conversational approach, structured
around a set of outcomes. Selection of the type
of question or structure of the tool should be
influenced by the relevant population. Concrete
questions and tightly specified pre-defined
scales can present challenges to people with
communication support needs.
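To illustrate the difference in question types, the sketch below shows a 'defined scale', where each point on the journey of change has a pre-determined meaning that can be read back to the person. The labels are invented and are not taken from any particular tool.

```python
# Illustrative sketch of a 'defined scale': each point on the journey
# of change has a pre-determined meaning. Labels are invented and do
# not reproduce any published tool.

journey_of_change = {
    1: "Stuck",
    2: "Accepting help",
    3: "Believing change is possible",
    4: "Learning new ways of coping",
    5: "Managing independently",
}

def describe(score: int) -> str:
    """Return the pre-determined meaning of a score on the scale."""
    return f"{score}: {journey_of_change[score]}"

print(describe(2))  # -> 2: Accepting help
print(describe(4))  # -> 4: Learning new ways of coping
```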
SMART principles can be usefully employed
when discussing and recording outcomes.
Traditionally, SMART outcomes have been defined using the first term in each pair below, as set out by Doran (1981). However, various alternatives are in use, and the bracketed alternatives have been found to be more compatible with outcomes approaches (Miller and Cook 2011):
Specific (or Significant).
Measurable (or Meaningful).
Attainable (or Action-Oriented).
Relevant (or Rewarding).
Time-bound (or Trackable).
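As a minimal sketch of recording against these headings (the example wording, person and field labels are all invented for illustration), a single outcome might be captured as:

```python
# Minimal sketch: one outcome recorded against the SMART headings.
# Field names and the example wording are invented, not a template
# drawn from any published guidance.

smart_outcome = {
    "specific":   "Margaret wants to feel confident going to the local shop",
    "measurable": "Self-rated confidence, discussed at each review",
    "attainable": "Supported weekly trips with a befriender",
    "relevant":   "Margaret identified getting out as what matters most",
    "time_bound": "Reviewed after three months",
}

for heading, detail in smart_outcome.items():
    print(f"{heading.replace('_', '-').title()}: {detail}")
```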
Conclusion
A focus on personal outcomes within human
services offers potential to refocus on what
matters to people who use those services, with
potential benefits for the individuals involved,
staff and organisations. Although outcomes
have been prevalent in policy for some time, a
range of challenges remain with regard to their
measurement. The key challenges covered in
this paper all relate to the meaningfulness of
measures. There is the need to decide whether
the emphasis is weighted towards measuring
for improvement or measuring for judgement
or externally driven performance management,
with concern that the improvement potential
can be compromised when the predominant
emphasis is judgement. Related considerations
are the selection of hard or soft outcomes and
the challenge of attribution. Acknowledging
these challenges is a necessary step in
progressing towards meaningful measurement.
Literature suggests that there is real potential
to link outcomes measurement to the
organisational value base and a range of
approaches and tools are emerging to support
this. There is also a significant role for funders
and policymakers in ensuring that agencies
involved in direct support are not overburdened by demands for measures that are system rather than people driven.
References

Audit Commission (2004) Information and data
quality in the NHS: Key messages from three
years of independent review, London: Audit
Commission
Beresford, P. and Branfield, F. (2006) Developing
inclusive partnerships: User-defined outcomes,
networking and knowledge – a case study,
Health and Social Care in the Community, 14(5)
436–444
Beresford, P., Fleming, J., Glynn, M., Bewley, C., Croft, S., Branfield, F. and Postle, K. (2011)
Supporting people: towards a person-centred
approach, London: Policy Press
Butcher, B. and Marsden, L. (2004) Measuring
soft outcomes: A review of the literature, The
Research Centre: City College Norwich
CCPS (2010) An outcomes approach in social
care and support: An overview of current
frameworks and tools, Edinburgh: CCPS
Charities Evaluation Services (2004)
Jargonbuster Issue 1. http://www.ces-vol.org.uk/index.cfm?format=171
Cook, A. and Miller, E. (2010) Talking Points:
Personal outcomes approach update report
June 2010: focus on making use of information
on outcomes, Edinburgh: Joint Improvement
Team
Culpitt, S. and Ellis, J. (2003) Your project and
its outcomes, London: Charities Evaluation
Service
Dewson, S., Eccles, J., Tackey, N.D. and
Jackson, A. (2000) Guide to measuring soft
outcomes and distance travelled, Brighton: The Institute for Employment Studies
Department of Health (2008) High quality care
for all: NHS next stage review final report,
London: Department of Health
Doran, G. T. (1981). There’s a S.M.A.R.T. way
to write management’s goals and objectives,
Management Review, Volume 70, Issue 11(AMA
FORUM), pp. 35–36
Evaluation Support Scotland (2009a) Developing a logic model http://www.evaluationsupportscotland.org.uk/downloads/Supportguide1.2logicmodelsJul09.pdf
Evaluation Support Scotland (2009b) Using
qualitative information for evaluation
http://www.evaluationsupportscotland.org.uk/downloads/SupportGuide3.4qualyinfoJul09.pdf
Felton, K. (2005) Meaning-based quality of life
measurement: A way forward in conceptualising
and measuring client outcomes? British Journal
of Social Work 35, 221–236
Housing Support Enablement Unit (2011)
Better futures outline http://www.ccpscotland.org/assets/files/hseu/information/Better%20Futures/10.03.11%20-%20Launch%20Handout.pdf
MacKeith, J. and Graham, K. (2007) A practical
guide to outcomes tools, London: Triangle
Consulting
McGuire, A. (2010) Learning Point 57: Outcome
focused targets, Edinburgh: The Improvement
Service
Miller, E. and Cook, A. (2011) Recording
outcomes: The critical link between
engagement and improvement, Edinburgh: Joint Improvement Team
Miller, E. (2011) Individual outcomes in health
and social care, Edinburgh: Dunedin
Qureshi, H. (ed) (2001) Outcomes in social care
practice, York: SPRU
Raleigh, V.S. and Foot, C. (2010) Getting
the measure of quality: Opportunities and
challenges, London: The Kings Fund
Scottish Executive (2004) Better outcomes for
older people – A framework for joint services
for older people, Edinburgh: Scottish Executive,
Joint Services Group.
Scottish Government (2006) Transforming
public services: The next phase of reform,
Edinburgh: Scottish Government
Scottish Government (2007) Concordat
between the Scottish Government and Local
Government, Edinburgh: Scottish Government
and COSLA
Scottish Government (2008a) A guide to getting
it right for every child, Edinburgh: Scottish
Government
Scottish Government (2008b) Final definitions:
Community care outcomes framework, CEL
issued on behalf of Scottish Government,
COSLA and NHS Scotland, Edinburgh: Scottish
Government
Scottish Government (2010) National outcomes
and standards for social work services in the
criminal justice system, Edinburgh: Scottish
Government
Scottish Government (2011) The opportunities
and challenges of the changing public services
landscape for the third sector in Scotland: A
longitudinal study year one report (Baseline
Findings), Edinburgh: The Scottish Government
Smith, P. (2007) On the unintended
consequences of publishing performance data
in the public sector, International Journal of
Public Administration, 18, 2–3, 277–310
Thompson, N. (2008) Focusing on outcomes:
Developing systematic practice, Practice 20(1)
5–16
Wainwright, S. (2002) Measuring impact: A
guide to resources, London: NCVO
Walshe, K. (2007) Understanding what works –
and why – in quality improvement: The need for
theory-driven evaluation, International Journal
for Quality in Health Care, 19(2), 57–9
Whitman, J. (2008) Evaluating philanthropic
foundations according to their social values,
Nonprofit Management and Leadership, 18(4)
417–434
Acknowledgements
This Insight was reviewed by Karen Barrie
(Healthcare Improvement Scotland), Lisa Burton
(Inverclyde Council), Richard Fowles (Care
Inspectorate), Joyce Lorimer (Moray Council),
Steven Marwick (Evaluation Support Scotland),
Erik Sutherland (East Renfrewshire Council).
enquiries@iriss.org.uk
The Institute for Research and Innovation in Social Services (IRISS) is a charitable company limited by guarantee. Registered in
Scotland: No 313740. Scottish Charity No: SC037882. Registered Office: Brunswick House, 51 Wilson Street, Glasgow, G1 1UZ
This work is licensed under the Creative Commons Attribution-Non Commercial-Share Alike 2.5 UK:
Scotland Licence. To view a copy of this licence, visit www.creativecommons.org/licenses/by-nc-sa/2.5/
scotland/ Copyright © 2011
Design—www.publishingbureau.co.uk