Developing a Research Agenda for Impact Evaluation in Development
Abstract This article sets out what would be required to develop a research agenda for impact evaluation.
It begins by explaining why it is needed and what process it would involve. It outlines four areas where
research is needed – the enabling environment, practice, products and impacts. It reviews the different
research methods that can be used to research impact evaluation and argues for particular attention to
detailed, theory-informed, mixed-method comparative case studies of the actual processes and impacts of
impact evaluation. It explores some examples of research questions that would be valuable to focus on and
how they might be addressed. Finally, it makes some suggestions about the process that is needed to create
a formal and collaborative research agenda.
…for factors other than the intervention that might account for the observed change (USAID 2011: 1).

In this article, impact evaluation covers any evaluation which assesses actual or likely impacts – the Organisation for Economic Co-operation and Development-Development Assistance Committee (OECD-DAC) defines impacts as 'positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended' (OECD-DAC 2010: 24). This implies that an impact evaluation has to address longer term results, but not necessarily directly; it could use other data to make links to likely longer term results, and include ex ante and ex post facto impact evaluation. What is particular about impact evaluation is that it seeks causal inference, understanding the role of particular interventions in producing change. This essential characteristic of impact evaluation determines its importance as a public good in terms of producing evidence of 'What works?' and 'What works for whom in what contexts?'

By development, we are referring not only to projects funded by international aid but also to programmes, projects, policies and strategies that are funded through various means with the aim of improving health and welfare.

Some of the conceptual maps of impact evaluation in development have only included particular types of development, particular types of impact evaluation, and particular aspects of impact evaluations. Much of the discussion has focused on causal inference methods in experimental or quasi-experimental impact evaluation of discrete donor-funded aid projects in order to inform decisions about scaling up interventions that have been found to be effective.

A research agenda on impact evaluation needs to include the larger map of development – not just donor-funded projects, but country-led programmes and policies, public–private partnership projects and civil society development interventions. Consequently, it needs to include impact evaluations for a range of different users – donors and national governments are important, but also the decentralised level of government responsible for implementation, non-governmental organisations (NGOs) and the private sector who are also engaged in delivery. Last but not least, it needs to include impact evaluations that treat communities as users of evaluations, drivers of development effectiveness and agents of evaluation themselves.

The research agenda needs to include all scales of intervention – not only individual projects, but also programmes, multiple projects as part of a single programme, strategies and policies. It should also include impact evaluations that look at when particular intervention types are suitable – projects which aim to catalyse or coordinate, impact investment, pay-for-performance, or capacity development.

2.2 The need for a research agenda on impact evaluation
Impact evaluation can make an important contribution to development. The results can inform decisions about what to invest in, and in what situations, and how to adapt successful projects, programmes and policies for new situations. Evidence of effective development interventions can be used to advocate for continuing or increased funding, especially in a climate of increasing scepticism about the value of international aid. The process of impact evaluation can improve communication between stakeholders and focus attention on results. It can support the principles of effective development, including partnerships and local agency. But impact evaluation can also harm development. Poor quality impact evaluation (either using methods and processes poorly, or using inappropriate ones) can provide invalid, misleading or overly simplified findings. These can lead to poor decisions, such as scaling up interventions that are ineffective or harmful, or that are implemented in situations where they don't have a chance to work. Poor quality impact evaluation processes can undermine developmental processes, reinforcing power disparities and reducing accountability to communities.

Concerns about the impact of poor quality impact evaluation have led to vigorous and sometimes vitriolic debates about appropriate methods for impact evaluation. For example, at a symposium on evidence-based policy, the alternatives to experimental and quasi-experimental designs were summarised as performance measures, customer satisfaction and 'charlatans' (Smith and Sweetman 2009: 85).
However, recommendations for practice have rarely been based on systematic and empirical evidence. It is difficult to secure funding for research into evaluation, and there are few incentives for organisations to collaborate on the sorts of research that would be needed. An international research agenda for impact evaluation would help to build a much needed evidence base for more effective and appropriate impact evaluation. The research agenda could provide a focus for research and an impetus and incentive for joint research across the various sectors, disciplines and organisations involved in impact evaluation of development. It would help to secure commitment and resources for research and to prioritise where these might be applied best. It could support agreements about priority areas for research and appropriate methods for doing this research, and help to make better use of the research that is done by supporting synthesis and dissemination.

3 What is a research agenda and how should it be developed?
To be effective, a research agenda cannot simply be a wish list developed by researchers, nor an ambit claim developed by a self-selected group. It needs to be inclusive, transparent and defensible. To maximise uptake of the findings, it needs to encompass strategies and processes for engaging intended end users in the research process, including in the process of identifying and deciding research priorities.

Some recent examples from the public health arena may provide useful insights in terms of what is needed to get to a research agenda on impact evaluation in development (see, for example, MOHSS/DSP 2010; NAMc 2010; Peersman 2010). In response to a call for an increased focus on programme evaluation to improve national HIV responses, the Joint United Nations Programme on HIV/AIDS (UNAIDS) supported governments1 in the implementation of a national evaluation agenda for HIV. The first step in the process was to develop a national evaluation strategy describing the rationale and objectives for targeted programme evaluations and the procedures and infrastructure for coordination and management of the studies. Formal agreements built on existing roles and responsibilities rather than setting up parallel systems, and capitalised on the comparative strengths of different organisations involved. A transparent, standards-based and consultative process was then used to identify key information gaps and to prioritise evaluation studies.

Bringing users of evaluation findings and evaluators together helped to ensure that selected studies were pertinent to the decision-making needs within the national AIDS programme (at all implementation levels) rather than just serving the needs of research institutions, evaluators or funders. It also helped to identify where common interests could be galvanised and unnecessary duplication avoided. There was also more synergy between new and completed evaluation studies and a greater willingness to share evaluation findings. A clear rationale and a costed plan for the implementation of prioritised studies helped to mobilise the funding needed (NAMc 2010).

Understanding what was already known (and thus, where important information gaps exist) was an essential preparatory step in helping to decide evaluation priorities. However, it proved a time-consuming and challenging task as the information was often scattered and not always available in the public domain. Hence, sufficient resources and time need to be provided to do this step well.

Involving a range of different stakeholders with different interests, understandings and/or capacities for evaluation required consensus-building as well as capacity development. These additional efforts allowed for the perspectives of different stakeholders to be heard and appropriately accommodated. It was particularly important to conduct the prioritisation of evaluation studies in a transparent manner and according to agreed criteria such as, for example, the following considerations:

1 The study needs to address an important data gap for improving the national AIDS programme:
   important – the potential for impact of the findings is high; addresses 'need to know' not 'nice to know'
   data gap – the question cannot already be answered by existing studies, available data or information
   programme improvement – the evaluation provides information on what can be done better in terms of programme implementation, effectiveness, and/or efficiency;
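To make the prioritisation step concrete, below is a minimal sketch of how such criteria-based screening and ranking might be operationalised. Only the criterion names are taken from the considerations above; the candidate studies, scores and scoring scale are invented for illustration.

```python
# Illustrative sketch of transparent, criteria-based prioritisation of
# candidate evaluation studies. Criterion names follow the considerations
# above; the candidate studies and all scores are invented.

candidates = [
    # (study, fills_data_gap, importance 0-5, programme_improvement 0-5)
    ("retention in treatment programmes", True, 5, 5),
    ("reach of prevention campaign", False, 4, 3),  # already answerable from existing data
    ("community home-based care coverage", True, 3, 4),
]

# Screen: drop questions that can already be answered by existing studies,
# available data or information (the 'data gap' criterion).
shortlist = [c for c in candidates if c[1]]

# Rank the remainder by potential for impact of the findings ('important')
# plus scope for improving implementation ('programme improvement').
shortlist.sort(key=lambda c: c[2] + c[3], reverse=True)

for study, _, importance, improvement in shortlist:
    print(f"{importance + improvement:>2}  {study}")
```

The point of writing such a rule down is not the arithmetic but the auditability: stakeholders can see exactly how an agreed criterion translated into a ranking.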
Table 1 Types of research into impact evaluation with illustrative research questions
(Columns: Descriptive – what does it look like? Causal – what are the factors that make it like this? Evaluative – in what ways and to what extent is it good?)

Enabling environment – guidance, requirements, policies, formal procedures and expectations
   Descriptive: How is impact evaluation defined in official guidance? What formal and informal incentives and disincentives exist for conducting and using impact evaluation?
   Causal: What factors influence how prescriptive guidelines are?
   Evaluative: To what extent do guidelines provide technically correct advice and prescriptions for evaluators and evaluation commissioners and managers?

Practice – what is done in an evaluation
   Descriptive: To what extent are impact evaluations conducted in accordance with guidelines? What are the strategies used to elicit and use the values of intended beneficiaries in planning and undertaking the impact evaluation? What techniques are used when baseline data are not available? How is process tracing used for causal inference when a counterfactual cannot be constructed?
   Causal: What factors influence the level of involvement of intended beneficiaries in impact evaluation decisions and processes? What factors influence or facilitate the use of process tracing in impact evaluations?
   Evaluative: How effectively do impact evaluations incorporate the values of intended beneficiaries? How valid are reconstructed baselines? How credible are causal inferences made on the basis of process tracing?

Products – reports and other documents produced during an evaluation
   Descriptive: To what extent are evaluation reports consistent with guidelines? What methods of data visualisation are used to communicate findings?
   Causal: What factors influence full disclosure of technical limitations of impact evaluations? Does a focus on reporting and data visualisation lead to more or less attention on the quality of data collection and analysis?
   Evaluative: How validly do evaluation reports present findings?

Impact – influence of report and process on decisions, actions and attitudes
   Descriptive: What are the intended and unintended impacts of impact evaluation reports and processes?
   Causal: Under what conditions does the involvement of intended users in the impact evaluation process produce higher engagement and use?
   Evaluative: How can evaluation contribute to social betterment?

Combined
   Descriptive: Under what conditions are external evaluation teams seen as more credible than an internal team or a hybrid team?
   Causal: Do narrow definitions of impact evaluation (constructed counterfactual) lead to lower investment in interventions where this design is not possible? Do simple messages of average findings produce more or less engagement and support among decision-makers?
   Evaluative: To what extent do impact evaluation policies affect what can be evaluated?

Source: BetterEvaluation.2
…types. In some cases, reasons for the variation were explained in the documentation.

The enabling environment includes both formal and informal processes, and not all of it will be visible in formal documentation. Some of it will be in the form of verbal explanations of 'the way things are done here'. This has implications for the research methods needed to study the enabling environment, which are discussed in Section 5.

For example, Coryn et al. (2007) reviewed the models and mechanisms for evaluating government-funded research. They examined the processes used in 16 countries where there were sufficient data to undertake the analysis, and developed a typology of models. A purposive sample of judges rated each of the models in terms of 25 indicators related to five criteria: validity, utility, credibility, cost-effectiveness (monetary and non-monetary), and ethicality. Scores were then weighted and reported.
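The aggregation step in such an approach is simple to make concrete. Below is a minimal sketch in Python; the five criterion names come from the text, while the weights, judge ratings and model names are invented for illustration (Coryn et al.'s actual indicators and weights are not reproduced here).

```python
# Illustrative sketch of weighted multi-criteria scoring of evaluation models,
# loosely modelled on the Coryn et al. (2007) approach. Criterion names are
# from the text; weights, judge ratings and model names are invented.

CRITERIA_WEIGHTS = {
    "validity": 0.30,
    "utility": 0.25,
    "credibility": 0.20,
    "cost-effectiveness": 0.15,
    "ethicality": 0.10,
}

def weighted_score(ratings):
    """Combine judge-averaged ratings per criterion (e.g. on a 1-5 scale)
    into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

models = {
    "model_A": {"validity": 4.2, "utility": 3.8, "credibility": 4.0,
                "cost-effectiveness": 3.1, "ethicality": 4.5},
    "model_B": {"validity": 3.5, "utility": 4.4, "credibility": 3.9,
                "cost-effectiveness": 4.2, "ethicality": 4.1},
}

# Report models in descending order of weighted score.
for name in sorted(models, key=lambda m: weighted_score(models[m]), reverse=True):
    print(f"{name}: {weighted_score(models[name]):.2f}")
```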
4.2 Impact evaluation practice
While many discussions about impact evaluation have focused only on designs for causal inference, impact evaluation practice involves much more than this. It involves up-front work by commissioners of evaluation to decide what should be the focus of an impact evaluation and how it should be managed. It involves other tasks during the actual evaluation, including selecting appropriate measures and negotiating the criteria, standards and weighting that will be used to make evaluative judgements (especially if the impact evaluation is intended to provide a comparison of alternatives). And it involves activities after an evaluation report is produced, including dissemination and support to use the findings, meta-evaluation and, in many cases, synthesis of findings from multiple evaluations. It is helpful to think about this broad scope of impact evaluation in terms of seven clusters of evaluation tasks (see Table 2).

4.3 Impact evaluation products
Evaluation reports are just one of the products produced by an impact evaluation. Important artefacts are produced at the beginning of the process, which may include: the rationale for undertaking an impact evaluation of this intervention at this time; the terms of reference, scope of work or request for proposal produced to brief potential evaluators; the proposals they develop in response, often outlining a design for the evaluation; and an inception report developed as a first deliverable, sometimes including a revised design. During the evaluation, interim and progress reports are produced, and at the end, in addition to a final report, there can be policy briefs, briefing notes, audiovisual versions of the report and social media reporting.

Documenting, describing and analysing these products would not be easy, since many would be internal documents or subject to commercial-in-confidence restrictions. Overcoming these barriers would provide useful evidence of the different formats and contents of these products as well as evidence of their quality. It would be particularly useful to undertake research which looked at the products and the processes used to produce them.

4.4 The impacts of impact evaluation
The intended and actual impacts of impact evaluation are an essential element of research on impact evaluation. This research needs to address the different ways in which impact evaluation is intended to be used. Some impact evaluation which aims to discover 'what works' is intended to inform decisions about which interventions to scale up. There is now more interest in learning 'what works for whom in which circumstances', using either realist evaluation (Pawson and Tilley 1997) and realist synthesis (Pawson 2002), or more differentiated experimental or quasi-experimental designs (White 2009). For development interventions which will work differently in different situations (and this would include many if not most of them), impact evaluation needs to also inform users how to translate an intervention to other settings, making appropriate adjustments, not simply transferring it.

Research into the intended and actual impacts of impact evaluation needs to be informed by previous research on evaluation use and influence, including the extensive research on evaluation utilisation (e.g. Patton et al. 1975; Cousins and Leithwood 1986; Shulha and Cousins 1997) and more recent research into the different ways in which evaluation can and does have an impact (e.g. Valovirta 2002; Mark and Henry 2004). It should also take account of the ways in which impact evaluation can influence the different types of processes involved in implementation, including formal decisions, policies and processes, street-level bureaucrat 'workarounds', devolved decision-making in small groups, conflict and bargaining, and responding to chance and chaos (as outlined by Elmore 1978, with its implications for evaluation practice and research explored by Rogers and Hough 1995).

The actual impacts of impact evaluation are not, however, always positive, and research needs to be able to explore unintended negative impacts such as loss of trust (e.g. Schwarz and Struhkamp 2007), the damage done to communities through intrusive, time-consuming data extraction (e.g. Jayawickrama 2013), and goal displacement and data corruption in situations of high stakes evaluation (Perrin 2002).
Figure 1 Journal articles about evaluation as a subset of the universe of evaluation practice (nested sets, from largest to smallest: universe of evaluation practice; documented information about evaluation; publicly accessible accounts of evaluation; journal articles about evaluation)
5 How impact evaluation should be researched
5.1 Options for collecting and analysing data about impact evaluation
A research agenda would not only focus attention on specific research topics but also on the types of research that are needed to investigate impact evaluation for development in terms of its enabling environment, practice, products and impacts. There are a range of methods that should be used beyond those commonly used in published research – surveys of evaluators and analysis of published journal articles about evaluation. The Elmore (1978)/Rogers and Hough (1995) framework raises particular issues about their suitability. If evaluators and programme staff are acting as 'street-level bureaucrats', then they might well be reluctant to disclose their non-compliance with official processes and requirements. If 'conflict and bargaining' processes are important, where different parts of an organisation are engaged in conflict over scarce resources, and collaboration is about temporary advantage rather than long-term commitment to shared goals, then their assessments of the success or failure of the evaluation will be filtered through these perspectives, and probably not willingly disclosed.

The advice from Douglas (1976) is appropriate here. He reminds us that it may be unwise to analyse data as if all respondents are only trying to communicate their perfect knowledge of a situation to the researcher. Our informants' information, and their interpretation of that information, can be affected by their lack of knowledge, lack of self-awareness, and in some cases, deliberate deception.

While Douglas' research was into various forms of deviance, he has argued convincingly that similar issues arise in research into more conventional organisations:

   The researcher can expect that in certain settings, the members will misinform him, evade him, lie to him. This would be true in organised, ostensibly rationalised settings, like bureaucracies. And it is precisely those who are most knowledgeable about these kinds of problems, the managers and the organisational entrepreneurs, who will do most to keep him from learning about the conflicts, contradictions, inconsistencies, gaps and uncertainties. The reason for this is simply that they are the ones responsible for making things rational, organised, scientific, and legal (Douglas 1976: 91–2).

Research into impact evaluation needs to learn from research into other complex organisational phenomena and use a combination of methods, with a particular emphasis on theory-informed case studies. The following sections outline some of the types of research methods that can be used and issues to be addressed when choosing and using them to study impact evaluation.
5.2 Literature reviews and systematic reviews
Published materials can be a useful source of evidence about impact evaluation, but care is needed in both data collection and analysis. Formal documents are most appropriate for providing evidence about formal processes, such as guidance and systems. There is still a need to ensure that documents are representative. For example, Wildschut's (2014) review of guidance on logic models and logframes involved an exhaustive search for grey literature.

In recent years, there have been a number of studies which have been labelled as a systematic review of some aspect of evaluation practice but which have been severely limited in their scope – only including journal articles and books – while still drawing conclusions about the state of evaluation practice. For example, Coryn et al. (2011) claimed to have conducted a systematic review of theory-driven evaluation but, for reasons not explained or justified, restricted the search to 'traditional, mainstream scholarly outlets including journals and books', 'excluding other sources including doctoral dissertations, technical reports, background papers, white papers, and conference presentations and proceedings' (p. 208). Journal articles represent a small and unrepresentative sample of evaluation practice, as illustrated in Figure 1.

5.3 Surveys
Some surveys of evaluation practice in organisations have been based on such low response rates that there is real concern about their suitability to provide a picture of the state of practice. For example, the Innovation Network's study of evaluation practice and capacity in the non-profit sector (Reed and Moriaru 2010) reported over 1,000 survey responses but used a volunteer sample with a response rate of 2.97 per cent and a profile of organisations very different to the target population. Even where response rate is not a problem, there remain the challenges of self-disclosure and self-awareness, as well as the question as to whether the person who completes the survey is able to speak on behalf of the organisation.
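As a rough sense of the scale involved (assuming the reported 2.97 per cent rate applies to the full invitation list), the implied sampling frame is

\[
N_{\text{invited}} \approx \frac{N_{\text{responses}}}{\text{response rate}} \approx \frac{1{,}000}{0.0297} \approx 33{,}700
\]

so the reported findings speak for fewer than 3 in every 100 organisations approached, with no way of knowing how the other 97 differ.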
5.4 Conference presentations
Conference presentations can be highly variable as sources of evidence about evaluation practice and impacts. What is very often presented at evaluation conferences and in evaluation journals are descriptions of one's own practice, based on poor documentation and offered in an environment where there are significant incentives to appear competent, to minimise problems and to make things look neater than the real, messy process. There can be barriers to disclosure at professional meetings where people are also seeking employment and engagement.
Develop or use appropriate measures and indicators
   Question: What are adequate indicators of important variables which cannot be actually measured?
   Approach: Review of indicators in use and the understanding of the situation by those using the indicators; peer review of those using the indicators; prize for best indicator in terms of utility and feasibility for a particular challenging outcome

Develop programme theory
   Question: How can a theory of change/logic model usefully represent complex aspects (uncertainty, emergence)?
   Approach: Identification of examples in actual evaluation reports, documentation of the process to develop them and their usefulness; competition to develop new types of logic models for specific scenarios

   Question: How can an organisation support projects to have locally specific theories of change/programme theory that are still coherent across a programme, an organisation or a sector?
   Approach: As above

Identify potential unintended results
   Question: What are effective strategies for identifying potential unintended results – and for negotiating their inclusion in the scope of the evaluation?
   Approach: Trials of negative programme theory3 methods with concurrent documentation of micro-detail of facilitation
…difficult for simulation studies to provide enough context for people to enter into a realistic process – and people may respond differently when they actually have an important stake in the evaluation.

More recently, a randomised controlled trial (RCT) was undertaken to test the effectiveness of different types of policy briefs (Beynon et al. 2012). A large volunteer sample from various networks on policy and evidence was randomly assigned to receive one of three different versions of a policy brief, and their responses were investigated through an online questionnaire, supplemented by telephone interviews with a sub-sample of each group.
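The assignment step in such a design is straightforward to sketch. The following is illustrative only; the version labels, participant names and seed are invented, and this is not a reproduction of Beynon et al.'s actual protocol.

```python
# Illustrative sketch of randomly assigning a volunteer sample to one of
# three versions of a policy brief, as in the Beynon et al. (2012) design.
# Version labels and participants are invented for illustration.
import random

VERSIONS = ("version_A", "version_B", "version_C")

def assign(participants, seed=42):
    """Shuffle participants with a fixed seed (so the allocation is
    auditable) and deal them into arms of near-equal size."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {p: VERSIONS[i % len(VERSIONS)] for i, p in enumerate(shuffled)}

allocation = assign(f"participant_{i:03d}" for i in range(12))
for participant, arm in sorted(allocation.items()):
    print(participant, arm)
```

Responses collected from each arm (here, the online questionnaire and interview data) would then be compared across the three versions.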
5.6 Systematic, rich case studies
Systematic case studies seem likely to provide the most useful evidence about impact evaluation in terms of enabling environment, practice, products and impact – and how these are linked. Different types of case studies (GAO 1990) would be useful. An illustrative case study would be descriptive, providing in-depth examples. This could be very useful as a guide for practice, or to develop a typology of practice. Purposeful sampling of the case, including selecting a typical case, would be appropriate. An exploratory case study is designed to generate hypotheses for later investigation. Particularly successful or problematic evaluations might be a rich source of new ideas about barriers and enablers to good practice or impacts. A critical instance case study might focus on a particular evaluation, or even a particular event or site or interaction within an evaluation, which provides a single instance of unique interest (for example, documenting an innovation) or serves as a critical test of an assertion (for example, demonstrating that something is possible). A programme implementation case study would examine how evaluation is being implemented, particularly in relation to internal or external standards. A programme effects case study would use systematic non-experimental causal inference methods, such as process tracing or comparative case study, to draw conclusions about what had produced the observed practices, products and impacts of the evaluations.
Table 4 Questions about the impact of particular impact evaluation methods

Identify appropriate measures and indicators
   Question: Does an emphasis on algorithmic interpretation of evidence-based policy (in the form of identifying 'what works' in terms of a single metric and then ranking alternatives in magnitude of effectiveness or cost-effectiveness) lead to less consideration of equity issues and the implementation of interventions that reduce equity?
   Approach: Interviews with decision-makers using evidence in the form of a single metric to explore the level of their attention to issues of heterogeneity and equity

Collect or curate data
   Question: What are the conditions when big data can provide useful information for impact evaluation?
   Approach: Review of existing examples of big data use to develop a typology of conditions; trials of using big data on a specific challenge

Understand causal attribution and contribution
   Question: How can systematic non-experimental strategies for causal inference be used and communicated effectively?
   Approach: Identify, document and review existing examples; trial approaches with input from research methods specialists working with evaluators

   Question: Does a requirement for constructed counterfactuals in impact evaluation lead to less investment in system-level interventions?
   Approach: Interviews with evaluation users
Documenting what is done and what happens can use a mix of anthropological methods and cross-case comparisons. Despite the concerns about the limitations of self-reported practice, documented practice will be an important source of knowledge. This would include reviewing existing reports of practice and creating new documentation. This could proceed retrospectively – identifying good practice that has happened and reconstructing what happened and why. It could involve concurrent documentation – identifying particular challenges and following alongside different ways of addressing them. It could involve documenting the processes used and the micro-interactions within an evaluation team and with other stakeholders. Research into the practices of highly skilled evaluators and evaluation managers could develop examples and eventually typologies of strategies used to effectively undertake each of the many tasks (previously outlined in the seven clusters) involved in an impact evaluation.

The process of identifying good examples to document and analyse for case studies could include the winning evaluations of awards and prizes offered by various evaluation associations, which have been judged to be of high quality. Another possible research method for identifying and investigating successful cases would be positive deviance (Richard, Sternin and Sternin 2010), which involves identifying rare cases of success and investigating what they are doing differently. What is particular is that the people involved in doing that enquiry are the people who want to use that knowledge. This might be evaluators seeking to learn from other evaluators who have conducted good impact evaluations despite challenges, or it might be evaluation commissioners and users seeking to learn from other commissioners and users.

Cases of low-quality impact evaluation, which could provide useful illustrations of problems and/or unsuccessful strategies, might be identified through crowdsourcing. For example, an enquiry to the discussion list XCeval asking for examples of 'ineffective evaluations' prompted 14 responses and candid discussion of the problems in these evaluations (Luo and Liu 2014). For case studies of weak impact evaluations, de-identification might well be necessary.
Choose what to evaluate
   Question: What investments and activities are the subjects of evaluation? On the basis of what evidence are decisions made about other investments and activities?
   Approach: Review of formal records of impact evaluations (where available); survey of evaluators

   Question: What opportunities exist for funding public interest impact evaluation rather than only funder-controlled evaluation?
   Approach: Review of public interest research examples

Develop key evaluation questions
   Question: What are effective processes to identify potential key evaluation questions and prioritise these to a feasible list?
   Approach: Detailed documentation and analysis of meeting processes to negotiate questions

Supporting use
   Question: How can the utility of an evaluation be preserved when the primary intended users leave before it is completed?
   Approach: Identify, document and analyse existing examples

Reporting findings
   Question: How can reports communicate clearly without oversimplifying the findings, especially when there are important differences in results?
   Approach: User testing of alternative reports

Manage evaluation
   Question: What procurement processes support effective selection and engagement of an evaluation team and effective management of the evaluation project?
   Approach: Interview evaluation managers, evaluators and contract managers about their processes, the impact of these and the rationale for them; develop a typology of issues and options

Evaluation capacity development
   Question: Why does so much evaluation fail to be informed by what is known about effective evaluation?
   Approach: Interviews with evaluation managers about their knowledge and sources of knowledge about evaluation management
A writeshop process, either face-to-face or virtual, can be one way to support retrospective documentation and the development of detailed case studies. This involves a combination of writing by one or more people associated with an evaluation (often the evaluation team, but in some cases the commissioner as well) with structured editing and peer review. Such writeshops can provide a structure for cases which examine and articulate aspects of practice that the writers had not previously thought of, and which were certainly not reported in the methodology section of an evaluation report (for example, Oakden 2013 provided a detailed account of using rubrics in the evaluation of school leadership; Cranston, Beynon and Rowley 2013 described an evaluation from the different perspectives of the evaluator and the evaluation commissioner).

5.7 Trials of methods
Formal trials of new methods, designs or processes would be an important type of research to support. This would require identifying either a promising method and finding a potentially appropriate situation to use it – or identifying a common challenge and finding a combination of methods or processes to address it. This could take the form of a trial where skilled users of methods apply them to a specific evaluation, with not only documentation but also follow-up evaluation of the validity, utility, feasibility and propriety of the evaluation (e.g. Rogers and McDonald 2001).
This research could be undertaken through a call for proposals (where a specific trial is proposed), through a matching process (where an evaluation site and a methodologist are paired up where supply and demand match), or through a competitive approach to research and development where applicants produce proposals which are competitively assessed – with the prize including actual implementation of the plan.

These trials could include longitudinal studies of the use and impact of evaluation, systematically investigating the extent and nature of impact from the findings and the processes of evaluation.

If one of the objectives of this research is to improve impact evaluation within a particular government, this approach would involve working with them to identify an example of a good impact evaluation, then to find out how they managed to achieve that, and then to explore whether their practices might be transferable. This approach suggests a fundamental shift in how the research would be done, from researcher-led to intended user-led.

6 Examples of important research questions about impact evaluation and how they might be answered
To illustrate what these different ideas might look like in practice, Tables 3–5 set out some research questions, grouped in terms of the different aspects of an impact evaluation. While they are all genuine research questions, which could contribute to improving impact evaluation in and for development, they have also been chosen to illustrate different types of research approaches that could be used.

7 Conclusion
The development of a formal research agenda will require a consultative process of identifying those who might contribute or benefit in various ways, in order to identify needs, priorities and opportunities. It needs sufficient resources. And it needs the right combination of creative abrasion and interdisciplinary cooperation.

The range of possible research questions is large. The scope for fieldwork and subsequent uptake is also large. Researchers from a number of different disciplines will be needed to do this well. This interdisciplinary 'creative abrasion' can help to surface assumptions about evaluation, which will add to the value of the research.

Increasing efforts at international collaboration, including special events around the International Year of Evaluation in 2015, could provide resources, networking opportunities and impetus to formalise the research agenda and proceed to fund its implementation over a number of years.
Ethnography, 2nd ed., Chicago and London: University of Chicago Press
White, H. (2009) Theory-based Impact Evaluation: Principles and Practice, 3ie Working Paper 3, www.3ieimpact.org/media/filer_public/2012/05/07/Working_Paper_3.pdf (accessed 5 August 2014)
Wildschut, L.P. (2014) 'Theory-Based Evaluation, Logic Modelling and the Experience of South African Non-Governmental Organisations', PhD dissertation, South Africa: Stellenbosch University