
Developing a Research Agenda for Impact Evaluation in Development

Patricia J. Rogers and Greet Peersman

Abstract This article sets out what would be required to develop a research agenda for impact evaluation.
It begins by explaining why it is needed and what process it would involve. It outlines four areas where
research is needed – the enabling environment, practice, products and impacts. It reviews the different
research methods that can be used to research impact evaluation and argues for particular attention to
detailed, theory-informed, mixed-method comparative case studies of the actual processes and impacts of
impact evaluation. It explores some examples of research questions that would be valuable to focus on and
how they might be addressed. Finally, it makes some suggestions about the process that is needed to create
a formal and collaborative research agenda.

1 Introduction
In recent years, there has been heated discussion about the best ways to do impact evaluation, driven in large part by concerns about the consequences of doing it badly – erroneous decisions about which programmes to invest in, and an inability to advocate for ongoing funding for development programmes. Much of this debate has focused on methods and designs for causal attribution, but there are other aspects of impact evaluation that have also been debated vigorously. The irony is that, although impact evaluation is intended to ensure that decisions about practice and policy are informed by evidence, the various arguments about impact evaluation have rarely been based on systematic research.

This article provides a starting point for the development of a formal and collaborative research agenda. It begins by defining why a research agenda is needed and what it would cover. It outlines four areas of impact evaluation where research is needed – enabling environment, practice, products and impacts. It reviews the different methods that can be used to research impact evaluation and argues for particular attention to detailed, theory-informed, mixed-method comparative case studies of the actual processes and impacts of impact evaluation. It explores some examples of research questions that would be valuable to focus on and how they might be addressed – not to provide a definitive review of each topic but to illustrate the scope and approach needed. Finally, it makes some suggestions about the process that is needed to create a research agenda that is not just a wish list or arena for fighting for resources by evaluators, but a productive collaboration among the various parties needed to bring the research agenda to life.

2 Why do we need research on impact evaluation in development?
Sometimes, 'impact evaluation' and 'development' are understood narrowly. We argue that a broad description of both is needed.

2.1 Defining impact evaluation and development
As in many areas of evaluation, there are different definitions and conceptualisations of impact evaluation. Some definitions restrict impact evaluation to studies which use particular designs – for example, the United States Agency for International Development (USAID) Evaluation Policy defines impact evaluation as involving a constructed counterfactual such as a control or comparison group:

Impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change (USAID 2011: 1).
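To make the counterfactual logic concrete, the following is a minimal sketch in Python, using purely hypothetical numbers rather than data from any study cited here. It shows the simplest constructed-counterfactual calculation: the impact estimate is the change observed in the intervention group minus the change observed in a comparison group, on the assumption that the two groups would otherwise have changed in parallel.

```python
# Hypothetical illustration only: mean outcomes before and after a programme,
# for communities that received it and for comparison communities that did not.
intervention = {"before": 42.0, "after": 55.0}
comparison = {"before": 41.0, "after": 47.0}

# The before-after change in the intervention group mixes the programme's effect
# with everything else that changed over the same period.
observed_change = intervention["after"] - intervention["before"]        # 13.0

# The comparison group stands in for the counterfactual: what would probably
# have happened without the programme.
counterfactual_change = comparison["after"] - comparison["before"]      # 6.0

# Difference-in-differences estimate of impact, assuming the two groups would
# otherwise have changed in parallel.
impact_estimate = observed_change - counterfactual_change               # 7.0

print(f"Observed change: {observed_change}; estimated impact: {impact_estimate}")
```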
In this article, impact evaluation covers any evaluation which assesses actual or likely impacts – the Organisation for Economic Co-operation and Development-Development Assistance Committee (OECD-DAC) defines impacts as 'positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended' (OECD-DAC 2010: 24). This implies that an impact evaluation has to address longer term results, but not necessarily directly; it could use other data to make links to likely longer term results, and include ex ante and ex post facto impact evaluation. What is particular about impact evaluation is that it seeks causal inference, understanding the role of particular interventions in producing change. This essential characteristic of impact evaluation determines its importance as a public good in terms of producing evidence of 'What works?' and 'What works for whom in what contexts?'

By development, we are referring not only to projects funded by international aid but also to programmes, projects, policies and strategies that are funded through various means with the aim of improving health and welfare.

Some of the conceptual maps of impact evaluation in development have only included particular types of development, particular types of impact evaluation, and particular aspects of impact evaluations. Much of the discussion has focused on causal inference methods in experimental or quasi-experimental impact evaluation of discrete donor-funded aid projects in order to inform decisions about scaling up interventions that have been found to be effective.

A research agenda on impact evaluation needs to include the larger map of development – not just donor-funded projects, but country-led programmes and policies, public–private partnership projects and civil society development interventions. Consequently, it needs to include impact evaluations for a range of different users – donors and national governments are important, but also the decentralised level of government responsible for implementation, non-governmental organisations (NGOs) and the private sector who are also engaged in delivery. Last but not least, it needs to include impact evaluations that treat communities as users of evaluations, drivers of development effectiveness and agents of evaluation themselves.

The research agenda needs to include all scales of intervention – not only individual projects, but also programmes, multiple projects as part of a single programme, strategies and policies. It should also include impact evaluations that look at when particular intervention types are suitable – projects which aim to catalyse or coordinate, impact investment, pay-for-performance, or capacity development.

2.2 The need for a research agenda on impact evaluation
Impact evaluation can make an important contribution to development. The results can inform decisions about what to invest in, and in what situations, and how to adapt successful projects, programmes and policies for new situations. Evidence of effective development interventions can be used to advocate for continuing or increased funding, especially in a climate of increasing scepticism about the value of international aid. The process of impact evaluation can improve communication between stakeholders and focus attention on results. It can support the principles of effective development, including partnerships and local agency. But impact evaluation can also harm development. Poor quality impact evaluation (either using methods and processes poorly, or using inappropriate ones) can provide invalid, misleading or overly simplified findings. These can lead to poor decisions, such as scaling up interventions that are ineffective or harmful, or that are implemented in situations where they don't have a chance to work. Poor quality impact evaluation processes can undermine developmental processes, reinforcing power disparities and reducing accountability to communities.

Concerns about the impact of poor quality impact evaluation have led to vigorous and sometimes vitriolic debates about appropriate methods for impact evaluation. For example, at a symposium on evidence-based policy, the alternatives to experimental and quasi-experimental designs were summarised as performance measures, customer satisfaction and 'charlatans' (Smith and Sweetman 2009: 85).

However, recommendations for practice have rarely been based on systematic and empirical evidence. It is difficult to secure funding for research into evaluation, and there are few incentives for organisations to collaborate on the sorts of research that would be needed. An international research agenda for impact evaluation would help to build a much needed evidence base for more effective and appropriate impact evaluation. The research agenda could provide a focus for research and an impetus and incentive for joint research across the various sectors, disciplines and organisations involved in impact evaluation of development. It would help to secure commitment and resources for research and to prioritise where these might be applied best. It could support agreements about priority areas for research and appropriate methods for doing this research, and help to make better use of the research that is done by supporting synthesis and dissemination.

3 What is a research agenda and how should it be developed?
To be effective, a research agenda cannot simply be a wish list developed by researchers, nor an ambit claim developed by a self-selected group. It needs to be inclusive, transparent and defensible. To maximise uptake of the findings, it needs to encompass strategies and processes for engaging intended end users in the research process, including in the process of identifying and deciding research priorities.

Some recent examples from the public health arena may provide useful insights in terms of what is needed to get to a research agenda on impact evaluation in development (see, for example, MOHSS/DSP 2010; NAMc 2010; Peersman 2010). In response to a call for an increased focus on programme evaluation to improve national HIV responses, the Joint United Nations Programme on HIV/AIDS (UNAIDS) supported governments1 in the implementation of a national evaluation agenda for HIV. The first step in the process was to develop a national evaluation strategy describing the rationale and objectives for targeted programme evaluations and the procedures and infrastructure for coordination and management of the studies. Formal agreements build on existing roles and responsibilities rather than setting up parallel systems and capitalise on the comparative strengths of different organisations involved. A transparent, standards-based and consultative process was then used to identify key information gaps and to prioritise evaluation studies.

Bringing users of evaluation findings and evaluators together helped to ensure that selected studies were pertinent to the decision-making needs within the national AIDS programme (at all implementation levels) rather than just serving the needs of research institutions, evaluators or funders. It also helped to identify where common interests could be galvanised and unnecessary duplication avoided. There was also more synergy between new and completed evaluation studies and a greater willingness to share evaluation findings. A clear rationale and a costed plan for the implementation of prioritised studies helped to mobilise the funding needed (NAMc 2010).

Understanding what was already known (and thus, where important information gaps exist) was an essential preparatory step in helping to decide evaluation priorities. However, it proved a time-consuming and challenging task as the information was often scattered and not always available in the public domain. Hence, sufficient resources and time need to be provided to do this step well.

Involving a range of different stakeholders with different interests, understandings and/or capacities for evaluation required consensus-building as well as capacity development. These additional efforts allowed for the perspectives of different stakeholders to be heard and appropriately accommodated. It was particularly important to conduct the prioritisation of evaluation studies in a transparent manner and according to agreed criteria such as, for example, the following considerations:

1 The study needs to address an important data gap for improving the national AIDS programme:
   important – the potential for impact of the findings is high; addresses 'need to know' not 'nice to know'
   data gap – the question cannot already be answered by existing studies, available data or information
   programme improvement – the evaluation provides information on what can be done better in terms of programme implementation, effectiveness, and/or efficiency;
2 The study addresses an immediate need – i.e. it provides timely data needed for key decisions in the next five years;
3 A good quality study is feasible (time frame, capacity, cost).
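Criteria of this kind can be applied transparently. The following is a minimal sketch in Python, with hypothetical studies and scores that are not taken from the UNAIDS-supported process described above, in which criteria 2 and 3 act as screens and the importance of the data gap ranks the studies that pass:

```python
# Hypothetical candidate studies scored against three agreed criteria:
# (1) fills an important data gap, with an importance score for 'need to know';
# (2) needed for a decision within the next five years; (3) feasible to do well.
candidates = [
    {"study": "Effectiveness of peer outreach", "fills_gap": True, "importance": 5, "timely": True, "feasible": True},
    {"study": "Repeat of a published coverage survey", "fills_gap": False, "importance": 2, "timely": True, "feasible": True},
    {"study": "Impact of adherence support clubs", "fills_gap": True, "importance": 4, "timely": True, "feasible": False},
    {"study": "Long-term impacts of orphan support", "fills_gap": True, "importance": 3, "timely": False, "feasible": True},
]

# Screen on the data gap, immediacy and feasibility criteria, then rank what
# remains by the agreed importance score.
shortlist = [c for c in candidates if c["fills_gap"] and c["timely"] and c["feasible"]]
shortlist.sort(key=lambda c: c["importance"], reverse=True)

for rank, c in enumerate(shortlist, start=1):
    print(f"{rank}. {c['study']} (importance {c['importance']})")
```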



The consensus-driven process was facilitated by an honest broker, someone with both evaluation and facilitation experience and who did not have a stake in what was being prioritised by whom. A first cut at prioritisation was achieved by facilitated discussions in small multi-stakeholder groups, the results of which were consolidated in plenary discussion.

Although the prioritisation of studies focused on addressing important information gaps in the short to medium term, the institutionalisation of the procedures and infrastructure allowed for the process to be repeated and address new information needs over time.

4 What needs to be researched?
Research is needed into four different aspects of impact evaluation: the enabling environment (policies, guidelines, guidance, formal and informal requirements and resources); practice (how impact evaluation is actually undertaken); products (the reports and other artefacts produced by impact evaluations); and the impacts of impact evaluation, including intended uses and other impacts. Some research will focus on only one of these but particularly useful research would link these, building evidence that could be used to develop contingent recommendations about the types of enabling environment, practices and products that are likely to produce beneficial impacts, and how to achieve them.

Across these different areas, different types of research are needed. Descriptive research would document what is being done, developing typologies and identifying patterns. Causal research would identify the factors that produce these patterns. Evaluative research would compare the actual performance to explicit standards of performance. A single research project might encompass more than one type of research. For example, a study of the guidelines that support and direct impact evaluation within development organisations could include descriptive research that documented the different types of guidance provided across different organisations, analysed to produce a typology in terms of the research designs that are acceptable or encouraged. It could include causal research that identified the factors that produced these variations across organisations. And, it could include evaluative research that compared these guidances to quality standards and made judgements about their quality.

Evaluation is not a technology that can be simply imported to new areas of application, but a practice that needs to be undertaken in ways that suit what is being evaluated and the situation of the evaluation. This means that we would need detailed descriptions of the context and what is done as well as what the consequences were. In addition, contingent advice needs to be developed for what methods and processes to use for particular situations, and how to support good impact evaluation.

These different types of research are discussed in more detail in the following sections, and illustrative examples of research questions, some focused on an individual impact evaluation and some on more than one evaluation, are included in Table 1.

4.1 The enabling environment for impact evaluation
Individual impact evaluations operate within a larger context of local guidance, policy, capacity development and formal and informal incentives. However, these are not always available for external scrutiny. Research could document the variations and develop typologies of different types used. This would be useful as a resource for other organisations to use and adapt, rather than re-invent the wheel. They would also be useful to combine with research into practice, products and impacts to develop knowledge about what types of guidance and enabling environment are effective in supporting quality impact evaluation – and the extent to which this varies depending on the organisational context and the nature of the development intervention being evaluated.

An example of this type of research was a study of guidance for the development and use of logic models and logframes among development organisations (Wildschut 2014). Manuals and guidelines from different bilateral and multilateral development agencies and international NGOs were compared in terms of the definitions used and the nature of the logic models used. The research found 120 different versions of logic models, which could be grouped into four broad types. In some cases, reasons for the variation were explained in the documentation.

Table 1 Types of research into impact evaluation with illustrative research questions
(Columns: Descriptive – what does it look like? Causal – what are the factors that make it like this? Evaluative – in what ways and to what extent is it good?)

Enabling environment – guidance, requirements, policies, formal procedures, expectations
Descriptive: How is impact evaluation defined in official guidance? What formal and informal requirements, incentives and disincentives exist for conducting and using impact evaluation?
Causal: What factors influence how prescriptive guidelines are?
Evaluative: To what extent do guidelines provide technically correct advice and prescriptions for evaluators and evaluation commissioners and managers?

Practice – what is done in an evaluation
Descriptive: To what extent are impact evaluations conducted in accordance with guidelines? What are the strategies used to elicit and use the values of intended beneficiaries in planning and undertaking the impact evaluation? What techniques are used when baseline data are not available? How is process tracing used for causal inference when a counterfactual cannot be constructed?
Causal: What factors influence the level of involvement of intended beneficiaries in impact evaluation decisions and processes? What factors influence or facilitate the use of process tracing in impact evaluations?
Evaluative: How effectively do impact evaluations incorporate the values of intended beneficiaries? How valid are reconstructed baselines? How credible are causal inferences made on the basis of process tracing?

Products – reports and other documents produced during an evaluation
Descriptive: To what extent are evaluation reports consistent with guidelines? What methods of data visualisation are used to communicate findings?
Causal: What factors influence full disclosure of technical limitations of impact evaluations? Does a focus on reporting and data visualisation lead to more or less attention on the quality of data collection and analysis?
Evaluative: How validly do evaluation reports present findings?

Impact – influence of report and process on decisions, actions and attitudes
Descriptive: What are the intended and unintended impacts of impact evaluation reports and processes?
Causal: Under what conditions does the involvement of intended users in the impact evaluation process produce higher engagement and use?
Evaluative: How can evaluation contribute to social betterment?

Combined
Descriptive: Under what conditions are external evaluation teams seen as more credible than an internal team or a hybrid team?
Causal: Do narrow definitions of impact evaluation (constructed counterfactual) lead to lower investment in interventions where this design is not possible? Do simple messages of average findings produce more or less engagement and support among decision-makers?
Evaluative: To what extent do impact evaluation policies affect what can be evaluated?

Source Authors' own.



Table 2 Key evaluation tasks organised in seven clusters

1 Manage an evaluation or evaluation system
– Decide what is to be evaluated
– Understand and engage stakeholders
– Establish decision-making processes for the evaluation
– Decide who will conduct the evaluation – generally (external, internal, hybrid) and specifically (choosing an evaluation team)
– Determine and secure resources
– Define ethical and quality evaluation standards
– Document management processes and agreements (e.g. Request for Proposal, contract)
– Develop planning documents for the evaluation (e.g. evaluation design, work plan)
– Review evaluation (do meta-evaluation)
– Develop evaluation capacity

2 Define what is to be evaluated
– Develop initial description
– Develop programme theory/logic model
– Identify potential unintended results

3 Frame the boundaries for an evaluation
– Identify primary intended users
– Decide purpose
– Specify the key evaluation questions
– Determine what 'success' looks like

4 Describe activities, outcomes, impacts and context
– Sample
– Use measures, indicators or metrics
– Collect and/or retrieve data
– Manage data
– Combine qualitative and quantitative data
– Analyse data
– Visualise data
– Generalise findings

5 Understand causes of outcomes and impacts
– Check the results support causal inference
– Compare results to the counterfactual
– Investigate possible alternative explanations

6 Synthesise data from one or more evaluations
– Synthesise data from a single evaluation
– Synthesise data across evaluations

7 Report and support use of findings
– Identify reporting requirements
– Develop reporting media
– Develop recommendations
– Support use

Source BetterEvaluation.2

The enabling environment includes both formal and informal processes, and not all of it will be visible in formal documentation. Some of it will be in the form of verbal explanations of 'the way things are done here'. This has implications for the research methods needed to study the enabling environment, which are discussed in Section 5.

For example, Coryn et al. (2007) reviewed the models and mechanisms for evaluating government-funded research. They examined the processes used in 16 countries where there were sufficient data to undertake the analysis, and developed a typology of models. A purposive sample of judges rated each of the models in terms of 25 indicators related to five criteria: validity, utility, credibility, cost-effectiveness (monetary and non-monetary), and ethicality. Scores were then weighted and reported.
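The weighting step in a review of this kind can be made explicit. The sketch below is a generic illustration in Python, with hypothetical weights, models and ratings that do not reproduce the Coryn et al. procedure or data; it averages indicator ratings within each criterion and then combines them using agreed criterion weights.

```python
from statistics import mean

# Hypothetical criterion weights (summing to 1.0) and judge ratings (1-5) on a
# few indicators per criterion.
weights = {"validity": 0.30, "utility": 0.25, "credibility": 0.20,
           "cost-effectiveness": 0.15, "ethicality": 0.10}

ratings = {
    "Model A": {"validity": [4, 5, 4], "utility": [3, 4, 4], "credibility": [4, 4, 5],
                "cost-effectiveness": [3, 3, 4], "ethicality": [5, 4, 5]},
    "Model B": {"validity": [3, 3, 4], "utility": [4, 5, 4], "credibility": [3, 4, 3],
                "cost-effectiveness": [4, 4, 5], "ethicality": [4, 4, 4]},
}

def weighted_score(model_ratings):
    # Average the indicator ratings within each criterion, then apply the weights.
    return sum(weights[criterion] * mean(scores)
               for criterion, scores in model_ratings.items())

for model, model_ratings in ratings.items():
    print(f"{model}: {weighted_score(model_ratings):.2f}")
```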

4.2 Impact evaluation practice
While many discussions about impact evaluation have focused only on designs for causal inference, impact evaluation practice involves much more than this. It involves up-front work by commissioners of evaluation to decide what should be the focus of an impact evaluation and how it should be managed. It involves other tasks during the actual evaluation, including selecting appropriate measures and negotiating the criteria, standards and weighting that will be used to make evaluative judgements (especially if the impact evaluation is intended to provide a comparison of alternatives). And, it involves activities after an evaluation report is produced, including dissemination and support to use the findings, meta-evaluation, and, in many cases, synthesis of findings from multiple evaluations. It is helpful to think about this broad scope of impact evaluation in terms of seven clusters of evaluation tasks (see Table 2).

4.3 Impact evaluation products
Evaluation reports are just one of the products produced by an impact evaluation. Important artefacts are produced at the beginning of the process which may include: the rationale for undertaking an impact evaluation of this intervention at this time; the terms of reference, scope of work or request for proposal produced to brief potential evaluators; the proposals they develop in response, often outlining a design for the evaluation; an inception report developed as a first deliverable, sometimes including a revised design. During the evaluation, interim and progress reports are produced and at the end, in addition to a final report, there can be policy briefs, briefing notes, audiovisual versions of the report and social media reporting.

Documenting, describing and analysing these products would not be easy, since many would be internal documents or subject to commercial-in-confidence restrictions. Overcoming these barriers would provide useful evidence of the different formats and contents of these products as well as evidence of their quality. It would be particularly useful to undertake research which looked at the products and the processes used to produce them.

4.4 The impacts of impact evaluation
The intended and actual impacts of impact evaluation are an essential element of research on impact evaluation. This research needs to address the different ways in which impact evaluation is intended to be used. Some impact evaluation which aims to discover 'what works' is intended to inform decisions about which interventions to scale up. There is now more interest in learning 'what works for whom in which circumstances', using either realist evaluation (Pawson and Tilley 1997) and realist synthesis (Pawson 2002), or more differentiated experimental or quasi-experimental designs (White 2009). For development interventions which will work differently in different situations (and this would include many if not most of them), impact evaluation needs to also inform users how to translate an intervention to other settings, making appropriate adjustments, not simply transferring it.

Research into the intended and actual impacts of impact evaluation needs to be informed by previous research on evaluation use and influence, including the extensive research on evaluation utilisation (e.g. Patton et al. 1975; Cousins and Leithwood 1986; Shulha and Cousins 1997) and more recent research into different ways in which evaluation can and does have an impact (e.g. Valovirta 2002; Mark and Henry 2004). It should also take account of the ways in which impact evaluation can influence the different types of processes involved in implementation, including formal decisions, policies and processes, street-level bureaucrat 'workarounds', devolved decision-making in small groups, conflict and bargaining, and responding to chance and chaos (as outlined by Elmore 1978, and its implications for evaluation practice and research explored by Rogers and Hough 1995).

The actual impacts of impact evaluation are not, however, always positive, and research needs to be able to explore unintended negative impacts such as loss of trust (e.g. Schwarz and Struhkamp 2007), the damage done to communities through intrusive, time-consuming data extraction (e.g. Jayawickrama 2013), and goal displacement and data corruption in situations of high stakes evaluation (Perrin 2002).



Figure 1 Journal articles as a sub-set of the universe of evaluation practice
[Nested sets, from largest to smallest: universe of evaluation practice; documented information about evaluation; publicly accessible accounts of evaluation; journal articles about evaluation.]

Source Authors' own.

5 How impact evaluation should be researched
5.1 Options for collecting and analysing data about impact evaluation
A research agenda would not only focus attention on specific research topics but also on the types of research that are needed to investigate impact evaluation for development in terms of its enabling environment, practice, products and impacts. There are a range of methods that should be used beyond those commonly used in published research – surveys of evaluators and analysis of published journal articles about evaluation. The Elmore (1978)/Rogers and Hough (1995) framework raises particular issues about their suitability. If evaluators and programme staff are acting as 'street-level bureaucrats', then they might well be reluctant to disclose their non-compliance with official processes and requirements. If 'conflict and bargaining' processes are important, where different parts of an organisation are engaged in conflict over scarce resources, and collaboration is about temporary advantage rather than long-term commitment to shared goals, then their assessments of the success or failure of the evaluation will be filtered through these perspectives, and probably not willingly disclosed.

The advice from Douglas (1976) is appropriate here. He reminds us that it may be unwise to analyse data as if all respondents are only trying to communicate their perfect knowledge of a situation to the researcher. Our informants' information, and their interpretation of that information, can be affected by their lack of knowledge, lack of self-awareness, and in some cases, deliberate deception.

While Douglas' research was into various forms of deviance, he has argued convincingly that similar issues arise in research into more conventional organisations:

The researcher can expect that in certain settings, the members will misinform him, evade him, lie to him. This would be true in organised, ostensibly rationalised settings, like bureaucracies. And it is precisely those who are most knowledgeable about these kinds of problems, the managers and the organisational entrepreneurs, who will do most to keep him from learning about the conflicts, contradictions, inconsistencies, gaps and uncertainties. The reason for this is simply that they are the ones responsible for making things rational, organised, scientific, and legal (Douglas 1976: 91–2).

Research into impact evaluation needs to learn from research into other complex organisational phenomena and use a combination of methods, with a particular emphasis on theory-informed case studies. The following sections outline some of the types of research methods that can be used and issues to be addressed when choosing and using them to study impact evaluation.

5.2 Literature reviews and systematic reviews
Published materials can be a useful source of evidence about impact evaluation but care is needed in both data collection and analysis. Formal documents are most appropriate for providing evidence about formal processes, such as guidance and systems. There is still a need to ensure that documents are representative. For example, Wildschut's (2014) review of guidance on logic models and logframes involved an exhaustive search for grey literature.

In recent years, there have been a number of studies which have been labelled as a systematic review of some aspects of evaluation practice but which have been severely limited in their scope – only including journal articles and books – but then drawing conclusions about the state of evaluation practice. For example, Coryn et al. (2011) claimed to have conducted a systematic review of theory-driven evaluation, but, for reasons not explained or justified, restricted the search to 'traditional, mainstream scholarly outlets including journals and books', 'excluding other sources including doctoral dissertations, technical reports, background papers, white papers, and conference presentations and proceedings' (p. 208). Journal articles represent a small and unrepresentative sample of evaluation practice, as illustrated in Figure 1.

An earlier comparison of theory-based evaluations reported in academic journals and books with those reported in conference presentations and evaluation reports (Rogers et al. 2000) had demonstrated how these are often quite different – the former dominated by academics and by successful evaluations, and the latter by evaluation consultants and including more descriptions of unsuccessful evaluations. Ignoring the larger group omits many of the larger evaluations and a wide diversity of practice, and risks making claims about the state of evaluation practice that are not accurate and/or missing the opportunity to learn from more detailed accounts than are published in academic journals.

5.3 Surveys of evaluators
Studying evaluation practice and impact through surveys has superficial appeal, especially to graduate students or other researchers seeking to quickly produce findings, but it has serious problems. Many surveys of evaluators or organisations have been based on such low response rates that there is real concern about their suitability to provide a picture of the state of practice. For example, the Innovation Network's study of evaluation practice and capacity in the non-profit sector (Reed and Moriaru 2010) reported over 1,000 survey responses but used a volunteer sample with a response rate of 2.97 per cent and a profile of organisations very different to the target population. Even where response rate is not a problem, there remain the challenges of self-disclosure and self-awareness, as well as the question as to whether the person who completes the survey is able to speak on behalf of the organisation.

5.4 Conference presentations
Conference presentations can be highly variable as sources of evidence about evaluation practice and impacts. What is very often presented at evaluation conferences and in evaluation journals are descriptions of one's own practice based on poor documentation and in an environment where there are significant incentives to appear competent, minimise the problems, and to make things look neater than the real messy process. There can be barriers to disclosure at professional meetings where people are also seeking employment and engagement.

Some evaluation conferences have, however, managed to provide an environment where people admit challenges and gaps. For example, the American Evaluation Association had a series of sessions on 'Stories of Evaluation' (e.g. Scriven and Rogers 1995) where people were encouraged to share stories of practice. These could be further developed along the lines of the different types of ethnographic stories outlined by Van Maanen (2011): realist; confessional; impressionist; and critical.

5.5 Simulations and experiments
Sometimes, it is possible to construct formal tests of different evaluation practices.

Simulation studies have difficulty simulating the group context within which programmes are implemented; most simulation studies on evaluation utilisation have therefore focused on the stage where an individual processes the evaluation information (e.g. Braskamp, Brown and Newman 1982) or makes a decision.



Table 3 Questions about how impact evaluation is undertaken

Develop or use appropriate measures and indicators
– Research question: What are adequate indicators of important variables which cannot be actually measured?
– Possible research approach: Review of indicators in use and the understanding of the situation by those using the indicators; peer review of those using the indicators; prize for best indicator in terms of utility and feasibility for a particular challenging outcome

Develop programme theory
– Research question: How can a theory of change/logic model usefully represent complex aspects (uncertainty, emergence)?
– Possible research approach: Identification of examples in actual evaluation reports, documentation of process to develop them and usefulness; competition to develop new types of logic models for specific scenarios
– Research question: How can an organisation support projects to have locally specific theories of change/programme theory that are still coherent across a programme, an organisation or a sector?
– Possible research approach: As above

Identify potential unintended results
– Research question: What are effective strategies for identifying potential unintended results – and for negotiating their inclusion in the scope of the evaluation?
– Possible research approach: Trials of negative programme theory3 methods with concurrent documentation of micro-detail of facilitation

Source Authors’ own.

It is difficult for simulation studies to provide enough context for people to enter into a realistic process – and people may respond differently when they actually have an important stake in the evaluation.

More recently, a randomised controlled trial (RCT) was undertaken to test the effectiveness of different types of policy briefs (Beynon et al. 2012). A large volunteer sample from various networks on policy and evidence was randomly assigned to receive one of three different versions of a policy brief, and their responses were investigated through an online questionnaire, supplemented by telephone interviews with a sub-sample of each group.

5.6 Systematic, rich case studies
Systematic case studies seem likely to provide the most useful evidence about impact evaluation in terms of enabling environment, practice, products and impact – and how these are linked. Different types of case studies (GAO 1990) would be useful. An illustrative case study would be descriptive, providing in-depth examples. This could be very useful as a guide for practice, or to develop a typology of practice. Purposeful sampling of the case, including selecting a typical case, would be appropriate. An exploratory case study is designed to generate hypotheses for later investigation. Particularly successful or problematic evaluations might be a rich source of new ideas about barriers and enablers to good practice or impacts. A critical instance case study might focus on a particular evaluation, or even a particular event or site or interaction within an evaluation which provides a single instance of unique interest (for example, documenting an innovation) or serves as a critical test of an assertion (for example, demonstrating that something is possible). A programme implementation case study would examine how evaluation is being implemented, particularly in relation to internal or external standards. A programme effects case study would use systematic non-experimental causal inference methods, such as process tracing or comparative case study, to draw conclusions about what had produced the observed practices, products and impacts of the evaluations.

Table 4 Questions about the impact of particular impact evaluation methods

Identify appropriate measures and indicators
– Research question: Does an emphasis on algorithmic interpretation of evidence-based policy (in the form of identifying 'what works' in terms of a single metric and then ranking alternatives in magnitude of effectiveness or cost-effectiveness) lead to less consideration of equity issues and the implementation of interventions that reduce equity?
– Possible research approach: Interviews with decision-makers using evidence in the form of a single metric to explore the level of their attention to issues of heterogeneity and equity

Collect or curate data
– Research question: What are the conditions when big data can provide useful information for impact evaluation?
– Possible research approach: Review of existing examples of big data use to develop a typology of conditions; trials of using big data on a specific challenge

Understand causal attribution and contribution
– Research question: How can systematic non-experimental strategies for causal inference be used and communicated effectively?
– Possible research approach: Identify, document and review existing examples; trial approaches with input from research methods specialists working with evaluators
– Research question: Does a requirement for constructed counterfactuals in impact evaluation lead to less investment in system-level interventions?
– Possible research approach: Interviews with evaluation users

Source Authors’ own.

Documenting what is done and what happens can use a mix of anthropological methods and cross-case comparisons. Despite the concerns about the limitations of self-reported practice, documented practice will be an important source of knowledge. This would include reviewing existing reports of practice and creating new documentation. This could proceed retrospectively – identifying good practice that has happened and reconstructing what happened and why. It could involve concurrent documentation – identifying particular challenges and following alongside different ways of addressing them. It could involve documenting the processes used and the micro-interactions within an evaluation team and with other stakeholders. Research into the practices of highly skilled evaluators and evaluation managers could develop examples and eventually typologies of strategies used to effectively undertake each of the many tasks (previously outlined in the seven clusters) involved in an impact evaluation.

The process of identifying good examples to document and analyse for case studies could include the winning evaluations of awards and prizes offered by various evaluation associations which have been seen to have been of high quality. Another possible research method for identifying and investigating successful cases would be positive deviance (Richard, Sternin and Sternin 2010), which involves identifying rare cases of success and investigating what they are doing differently. What is particular is that the people involved in doing that enquiry are the people who want to use that knowledge. This might be evaluators, seeking to learn from other evaluators who have conducted good impact evaluations, despite challenges, or it might be evaluation commissioners and users seeking to learn from other commissioners and users.

Cases of low-quality impact evaluation, which could provide useful illustrations of problems and/or unsuccessful strategies, might be identified through crowd sourcing. For example, an enquiry to the discussion list XCeval asking for examples of 'ineffective evaluations' prompted 14 responses and candid discussion of the problems in these evaluations (Luo and Liu 2014). For case studies of weak impact evaluations, de-identification might well be necessary.



Table 5 Questions about impact evaluation politics and infrastructure

Choose what to evaluate
– Research question: What investments and activities are the subjects of evaluation? On the basis of what evidence are decisions made about other investments and activities?
– Possible research approach: Review of formal records of impact evaluations (where available); survey of evaluators
– Research question: What opportunities exist for funding public interest impact evaluation rather than only funder-controlled evaluation?
– Possible research approach: Review of public interest research examples

Develop key evaluation questions
– Research question: What are effective processes to identify potential key evaluation questions and prioritise these to a feasible list?
– Possible research approach: Detailed documentation and analysis of meeting processes to negotiate questions

Supporting use
– Research question: How can the utility of an evaluation be preserved when the primary intended users leave before it is completed?
– Possible research approach: Identify, document and analyse existing examples

Reporting findings
– Research question: How can reports communicate clearly without oversimplifying the findings, especially when there are important differences in results?
– Possible research approach: User testing of alternative reports

Manage evaluation
– Research question: What procurement processes support effective selection and engagement of an evaluation team and effective management of the evaluation project?
– Possible research approach: Interview evaluation managers, evaluators and contract managers about their processes, the impact of these and the rationale for them; develop a typology of issues and options

Evaluation capacity development
– Research question: Why does so much evaluation fail to be informed by what is known about effective evaluation?
– Possible research approach: Interviews with evaluation managers about their knowledge and sources of knowledge about evaluation management

Source Authors’ own.

A writeshop process, either face-to-face or virtual, can be one way to support retrospective documentation and development of detailed case studies. This involves a combination of writing by one or more people associated with an evaluation (often the evaluation team but in some cases the commissioner as well) with structured editing and peer review. Such writeshops can provide a structure for cases in which contributors examine and articulate aspects of their practice that they had not previously thought about, and that were certainly not reported in the methodology section of an evaluation report (for example, Oakden 2013 provided a detailed account of using rubrics in the evaluation of school leadership; Cranston, Beynon and Rowley 2013 described an evaluation from the different perspectives of the evaluator and the evaluation commissioner).

5.7 Trials of methods
Formal trials of new methods, designs or processes would be an important type of research to support. This would require identifying either a promising method and finding a potentially appropriate situation to use it – or identifying a common challenge and finding a combination of methods or processes to address it. This could take the form of a trial where skilled users of methods apply them to a specific evaluation, with not only documentation but also follow-up evaluation of the validity, utility, feasibility and propriety of the evaluation (e.g. Rogers and McDonald 2001).

This research could be undertaken through a call for proposals (where a specific trial is proposed), through a matching process (where an evaluation site and a methodologist are paired up where supply and demand match), or through a competitive approach to research and development where applicants produce proposals which are competitively assessed – with the prize including actual implementation of the plan.

These trials could include longitudinal studies of the use and impact of evaluation, systematically investigating the extent and nature of impact from the findings and the processes of evaluation.

If one of the objectives of this research is to improve impact evaluation within a particular government, this approach would involve working with them to identify an example of a good impact evaluation, then to find out how they managed to achieve that, and then to explore whether their practices might be transferable. This approach suggests a fundamental shift in how the research would be done, from researcher-led to intended user-led.

6 Examples of important research questions about impact evaluation and how they might be answered
To illustrate what these different ideas might look like in practice, Tables 3–5 set out some research questions, grouped in terms of the different aspects of an impact evaluation. While they are all genuine research questions, which could contribute to improving impact evaluation in and for development, they have also been chosen to illustrate different types of research approaches that could be used.

7 Conclusion
The development of a formal research agenda will require a consultative process of identifying those who might contribute or benefit in various ways to identify needs, priorities and opportunities. It needs sufficient resources. And, it needs the right combination of creative abrasion and interdisciplinary cooperation.

The range of possible research questions is large. The scope for fieldwork and subsequent uptake is also large. Researchers from a number of different disciplines will be needed to do this well. This interdisciplinary 'creative abrasion' can help to surface assumptions about evaluation which will add to the value of the research.

Increasing efforts at international collaboration, including special events around the International Year of Evaluation in 2015, could provide resources, networking opportunities and impetus to formalise the research agenda and proceed to fund its implementation over a number of years.

Notes
1 The process was conducted in Bolivia, Botswana, the DRC, India, Kenya, Lesotho, Mozambique, Namibia, Nicaragua, Papua New Guinea, Rwanda, South Africa, Tanzania, Thailand, Uganda and Uruguay.
2 www.betterevaluation.org (accessed 30 July 2014).
3 Most programme theories show how an intervention is expected to contribute to positive impacts. Negative programme theory shows how it might produce negative impacts. See http://betterevaluation.org/evaluation-options/negative_program_theory.

References
Beynon, P.; Chapoy, C.; Gaarder, M. and Masset, E. (2012) What Difference does a Policy Brief Make?, Full Report of an IDS, 3ie, Norad study, Brighton: IDS
Braskamp, L.A.; Brown, R.D. and Newman, D.L. (1982) 'Studying Evaluation Utilization Through Simulations', Evaluation Review 6.1: 114–26
Coryn, C.L.; Hattie, J.A.; Scriven, M. and Hartmann, D.J. (2007) 'Models and Mechanisms for Evaluating Government-Funded Research. An International Comparison', American Journal of Evaluation 28.4: 437–57
Coryn, C.L.; Noakes, L.A.; Westine, C.D. and Schröter, D.C. (2011) 'A Systematic Review of Theory-Driven Evaluation Practice from 1990 to 2009', American Journal of Evaluation 32.2: 199–226
Cousins, J.B. and Leithwood, K.A. (1986) 'Current Empirical Research on Evaluation Utilization', Review of Educational Research 56.3: 331–64
Cranston, P.; Beynon, P. and Rowley, J. (2013) Two Sides of the Evaluation Coin: An Honest Narrative Co-Constructed by the Commissioner and the Contractor Concerning One Evaluation Experience, Melbourne: BetterEvaluation, http://betterevaluation.org/resource/example/two-sides-coin (accessed 30 July 2014)



Douglas, J. (1976) Investigative Social Research. Individual and Team Field Research, Beverly Hills CA: Sage Publications
Elmore, R.F. (1978) 'Organizational Models of Social Program Implementation', Public Policy 26.2: 185
GAO (1990) Case Study Evaluation, Washington DC: Government Accounting Office
Jayawickrama, J. (2013) '"If They Can't Do Any Good, They Shouldn't Come": Northern Evaluators in Southern Realities', Journal of Peacebuilding and Development 8.2: 26–41
Luo, L.P. and Liu, L. (2014) 'Reflections on Conducting Evaluations for Rural Development Interventions in China', Evaluation and Program Planning 47: 1–8
Mark, M.M. and Henry, G.T. (2004) 'The Mechanisms and Outcomes of Evaluation Influence', Evaluation 10.1: 35–57
MOHSS/DSP (2010) Proceedings. Stakeholders Workshop on 'Setting a National Evaluation and Research Agenda for HIV/AIDS in Namibia', Windhoek: Ministry of Health and Social Services, Directorate of Special Programs (MOHSS/DSP)
NAMc (2010) Proceedings. Consultative Workshop on 'Developing a National Evaluation Agenda for HIV/AIDS in Thailand', Cha-Am: National AIDS Management Center
Oakden, J. (2013) Evaluation Rubrics: How to Ensure Transparent and Clear Assessment that Respects Diverse Lines of Evidence, Melbourne: BetterEvaluation, http://betterevaluation.org/sites/default/files/Evaluation%20rubrics.pdf (accessed 5 August 2014)
OECD-DAC (2010) Glossary of Key Terms in Evaluation and Results Based Management, Paris: OECD, www.oecd.org/development/peer-reviews/2754804.pdf (accessed 5 August 2014)
Patton, M.Q.; Grimes, P.S.; Guthrie, K.M.; Brennan, N.J.; French, B.D. and Blyth, D.A. (1975) In Search of Impact: An Analysis of the Utilization of Federal Health Evaluation Research, Minneapolis and St Paul: University of Minnesota
Pawson, R. (2002) Evidence-Based Policy: The Promise of Realist Synthesis, London: Sage Publications
Pawson, R. and Tilley, N. (1997) Realist Evaluation, London: Sage Publications
Peersman, G. (2010) A National Evaluation Agenda for HIV. UNAIDS Monitoring and Evaluation Fundamentals, Geneva: UNAIDS
Perrin, B. (2002) Implementing the Vision: Addressing Challenges to Results-Focused Management and Budgeting, in OECD meeting Implementation Challenges in Results Focused Management and Budgeting, 11–12 February 2002, Paris
Reed, E. and Moriaru, J. (2010) State of Evaluation: Evaluation Practice and Capacity in the Nonprofit Sector, Washington DC: Innovation Network
Richard, P.; Sternin, J. and Sternin, M. (2010) The Power of Positive Deviance: How Unlikely Innovators Solve the World's Toughest Problems, Cambridge MA: Harvard Business Press
Rogers, P.J. and Hough, G.I. (1995) 'Improving the Effectiveness of Evaluations: Making the Link to Organizational Theory', Evaluation and Program Planning 18.4: 321–32
Rogers, P. and McDonald, B. (2001) Impact Evaluation Research Project, Melbourne: Department of Natural Resources and Environment
Rogers, P.J.; Petrosino, A.; Huebner, T.A. and Hacsi, T.A. (2000) 'Program Theory Evaluation: Practice, Promise, and Problems', New Directions for Evaluation 2000.87: 5–13
Schwarz, C. and Struhkamp, G. (2007) 'Does Evaluation Build or Destroy Trust? Insights from Case Studies on Evaluation in Higher Education Reform', Evaluation 13.3: 323–39
Scriven, M. and Rogers, P.J. (1995) 'Stories of Evaluation', panel presentation at the 1995 International Evaluation Conference on Evaluation for a New Century: A Global Perspective, American Evaluation Association co-sponsored with the Canadian Evaluation Society, Vancouver, British Columbia, Canada, November 1995
Shulha, L.M. and Cousins, J.B. (1997) 'Evaluation Use: Theory, Research, and Practice Since 1986', American Journal of Evaluation 18.3: 195–208
Smith, J. and Sweetman, A. (2009) 'Putting the Evidence into Evidence-Based Policy', in Productivity Commission (eds), Strengthening Evidence-based Policy in the Australian Federation, Canberra, 17–18 August 2009, Volume 1: Roundtable Proceedings, Canberra: Australian Government
USAID (2011) USAID Evaluation Policy, Washington DC: USAID, www.usaid.gov/sites/default/files/documents/2151/USAIDEvaluationPolicy.pdf (accessed 5 August 2014)
Valovirta, V. (2002) 'Evaluation Utilization as Argumentation', Evaluation 8.1: 60–80
Van Maanen, J. (2011) Tales of the Field: On Writing Ethnography, 2nd ed., Chicago and London: University of Chicago Press

White, H. (2009) Theory-based Impact Evaluation: Principles and Practice, 3ie Working Paper 3, www.3ieimpact.org/media/filer_public/2012/05/07/Working_Paper_3.pdf (accessed 5 August 2014)
Wildschut, L.P. (2014) 'Theory-Based Evaluation, Logic Modelling and the Experience of South African Non-Governmental Organisations', PhD dissertation, South Africa: Stellenbosch University

