Abstract
Health professionals deliver a range of health services to individuals and communities. The evaluation of these services is an
important component of these programs and health professionals should have the requisite knowledge, attributes, and skills to
evaluate the impact of the services they provide. However, health professionals are seldom adequately prepared by their training
or work experience to do this well. In this article we provide a suitable framework and guidance to enable health professionals to
appropriately undertake useful program evaluation. We introduce and discuss “Easy Evaluation” and provide guidelines for its
implementation. The framework presented distinguishes program evaluation from research and encourages health professionals
to apply an evaluative lens in order that value judgements about the merit, worth, and significance of programs can be made.
Examples from our evaluation practice are drawn on to illustrate how program evaluation can be used across the health care
spectrum.
Keywords
program evaluation, rubrics, health research, mixed methods
Received: July 5, 2020; revised: September 2, 2020; accepted: September 11, 2020
What is Program Evaluation?
Providing a precise and universally accepted definition of evaluation is difficult, in part because the discipline is extremely diverse (Gullickson et al., 2019). Patton (2018) observed that professional evaluators "are an eclectic group working in diverse arenas using a variety of methods drawn from a wide range of disciplines applied to a vast array of efforts aimed at improving the lives of people in places throughout the world" (p. 186). However, in spite of this diversity, the foundational description of evaluation as the systematic determination of the merit, worth, and significance of a program, project or policy endures (Scriven, 1991). More recently Davidson (2014) stated that "evaluation, by definition, answers evaluative questions, that is, questions about quality and value. This is what makes evaluation so much more useful and relevant than the mere measurement of indicators or summaries of observations and stories" (p. 1). Further, findings from program evaluations are characterized as being of particular use to inform decisions and identify options for program improvement (Patton, 2014).

In contrast, research is typically described as producing new knowledge through systematic enquiry (Gerrish, 2015). Researchers are often broadly characterized as seeking to understand and interpret meaning (qualitative research), or seeking to identify relationships between variables in order to explain or predict (quantitative research) (Braun & Clarke, 2013). These approaches can also be combined in multi-methods and mixed methods projects (DePoy & Gitlin, 2016). Research is thus primarily "valued" for the contribution it makes to the body of knowledge (Levin-Rozalis, 2003; Patton, 2014).

In general, health professionals are unaware of the distinctiveness of program evaluation as a discipline (Davidson, 2005). As a result, many reported "program evaluations" show no evidence of the application of any evaluation-specific methodology, and accordingly no robust and clear determination of the program's quality or value is made. One example of this is an evaluation of a structural competency training program for medical residents that aimed to raise awareness about how social, political, and economic structures impact on the illness and health of people (Neff et al., 2017). Among the qualitative results, "residents reported that the training had a positive impact on their clinical practice and relationships with patients. They also reported feeling overwhelmed by increased recognition of structural influences on patient health, and indicated a need for further training and support to address these influences" (Neff et al., 2017, p. 430). While these results appear insightful, no evaluative conclusions are provided that explicitly address how good, valuable, or worthwhile the training was in terms of changing clinical practice or relationships. Using criteria established by Davidson (2013), these presented results can most appropriately be considered "descriptive research facts" and not evaluation results, as there is no explicit evaluative methodology used or process of evaluative reasoning reported.

While there is no doubt the findings of this type of research study are useful, the core purpose of this article is to present a framework that allows claims to be made about a program's merit, worth, and significance. Such a framework requires specific skills and training in program evaluation over and above standard research skills.

Preparation to Undertake Program Evaluation
While training in research practice and methods is common in the undergraduate and graduate preparation of many health professionals, exposure to program evaluation is much less prominent, and in some cases absent. A review of several nursing research-focused textbooks identified that minimal information is provided about program evaluation compared with other research techniques and skills. For example, only one of the 29 chapters comprising the textbook Nursing Research: An Introduction (Moule et al., 2017) focused on program evaluation, including two pages outlining generic steps in conducting an evaluation. Similarly, in The Research Process in Nursing (Gerrish & Lathlean, 2015) one chapter of 40 (12 of 605 pages) is about program evaluation. The information about program evaluation provided in most textbooks is broad and explores core concepts but does not provide sufficient information to guide the conduct of an evaluation. In addition, while research skills are often considered foundational in the educational curricula for health professionals, training in program evaluation is much less prominent. Consequently, many health professionals develop research skills and conduct research projects, but evaluation skills and practice remain undeveloped.

Undertaking program evaluation requires a solid foundation in research knowledge and skills, as well as specific evaluation knowledge and skills. As Davidson (2007) noted, researchers may need to unlearn some of their research habits and develop new skills and ways of thinking: "Training in the applied social sciences provides a wonderful starting toolkit for a career in evaluation, albeit one that needs topping up with several essentials" (p. vi). The ability to utilize the perspective of evaluative reasoning is essential. This is a process through which evidence is collected (typically using standard research data collection methods) and assessed as the basis for making well-reasoned evaluative conclusions about the program being evaluated (Davidson, 2005, 2014).

As noted previously, much of the published information available to those wishing to undertake program evaluation remains generic, focusing on theory and concepts while lacking the specificity required to guide the conduct of an evaluation. In the remainder of the article we provide a practical and proven framework for program evaluation of health-related programs. If followed, the framework enables health professionals to plan and undertake program evaluation.

The framework, branded "Easy Evaluation," has been widely taught to the public health workforce in New Zealand since 2007 (Adams & Dickinson, 2010; Dickinson & Adams, 2012). We bring the perspectives of a program evaluation
Easy Evaluation
Easy Evaluation is a hybrid framework drawing on several established evaluation approaches. The central theoretical grounding is program theory-based evaluation. This approach centers people's understanding of what is required to develop and implement a successful program (Mertens & Wilson, 2019). These understandings form the basis of the program theory, which is typically shown in a logic model (Donaldson, 2007). Easy Evaluation also emphasizes the importance of the valuing tradition in evaluation, which requires value judgements about the merit and worth of a program to be made (Davidson, 2005). The framework also incorporates a participatory dimension through the active involvement of key stakeholders at all stages.

Easy Evaluation comprises six key phases (Figure 1). An explanation of these phases follows along with examples from completed evaluation studies. Further detail about implementing Easy Evaluation is available (Dickinson et al., 2015).1

[Figure 1. Easy evaluation framework.]

Logic Models
Logic models are a fundamental tool for evaluators using a theory-driven approach (Bauman & Nutbeam, 2014; Renger et al., 2019). The development of the model helps to set the boundaries of the project, program, strategy, initiative, or policy to be evaluated (sometimes generically referred to as the evaluand) (Bamberger & Mabry, 2020; Davidson, 2005). Logic models represent the causal processes through which the program2 is expected to bring about change and produce outcomes of interest (Donaldson, 2007; Hawe, 2015; Mills et al., 2019). Logic models provide an illustration of "an explicit theory or model of how an intervention, such as a project, programme, a strategy, an initiative, or a policy, contributes to a chain of intermediate results and finally to the intended or observed outcomes" (Funnell & Rogers, 2011, p. xix). In other words, the rationale or theory underpinning a program is provided in the model, and the expected outcomes are identified prior to the program being implemented (Mertens & Wilson, 2019).

Logic models represent stakeholders' views of how and why a program will work. When developing the model, it is therefore beneficial to include a wide range of stakeholder input. Stakeholders who have an interest are likely to include the program implementers and funders, as well as participants in the program. Logic models are often developed in facilitated workshop sessions designed to enable stakeholders to exchange ideas on program activity and the changes (outcomes) the program is likely to achieve. The process of working together to develop a model promotes a shared understanding of the program, what it is trying to achieve and the rationale underpinning it (Oosthuizen & Louw, 2013).

Logic models can be drawn in many ways. One key type is an outcome chain model, which is drawn in a way that represents the intervention and the consequences of its implementation (Funnell & Rogers, 2011). Models drawn in this tradition demonstrate the causal pathway through a series of linked outcomes (short term outcome → medium term outcome → long term outcome) that result from the program's implementation. When developing a logic model it is crucial to identify the key problem or overarching issue of concern. This problem or issue can then be "reversed" and written in a positive way. This would typically be shown as one of the medium or long term outcomes on the model. Doing this ensures the central issue or concern underpinning the program being evaluated remains prominent.

A logic model may have already been developed in the planning phases of a project, and existing models may need refining if they are not up to date. However, in many cases programs will not have a logic model and one will need to be developed for the evaluation. In relation to the sexual health social marketing program we evaluated (Figure 2), the logic model was newly developed for the evaluation.

The overarching health issue of concern in this program was the incidence of HIV infection among men who have sex with men (Adams et al., 2017). The desire to see a reduction in the incidence of HIV was thus included as the long-term program outcome. While a logic model is read following the direction of the arrows, typically the development of the logic model is undertaken in the opposite direction.
To achieve reduced incidence of HIV, the project team identified the need for increased condom use, and this was included as an outcome. Because increased condom use requires broad community commitment and support (Adams & Neville, 2009; Henderson et al., 2009), the maintenance and development of such a condom culture was included as a medium term outcome. In order to achieve this outcome, it was determined the target audiences needed to understand the key message of using condoms and, further, a tangible way for audiences to engage with the messages and program activities was required (these are expressed as the immediate short-term outcomes). Finally, a social marketing program was seen as appropriate to enable these short term outcomes to be achieved.

The completed model can therefore be "read" as follows: If a high quality social marketing program is developed and implemented, then the target audiences will understand the key messages and they will engage with the social marketing program. If the audiences understand and engage with the program, this will contribute to the maintenance and development of a positive condom culture. If this condom culture is maintained and developed there will be increased condom use for anal sex, which will lead to a reduced incidence of HIV. This model represents the program team's understanding of how the program would work. The evaluation of this program was planned to test this rationale by assessing the program's development and implementation, and to what extent the outcomes were achieved. In that regard, the program evaluation examined the quality of the intervention and tested the program's rationale/theory.

In making decisions about program elements to be represented in the model, the team drew on relevant theory and their own experiences to determine the causal links between the elements. The key theory drawn on is that consistent (and correct) use of condoms for anal sex by gay and bisexual men can lead to a reduced incidence of HIV (Shernoff, 2005; Sullivan et al., 2012). At other points when developing the model, the program team drew on their knowledge of the importance of peer support to encourage condom use (McKechnie et al., 2013; Seibt et al., 1995), and the effectiveness of social marketing approaches based on direct community engagement for reducing barriers to participation in the desired activity (McKenzie-Mohr, 2000; Neville & Adams, 2009; Neville et al., 2014).
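To make the shape of such an outcome chain concrete, the short sketch below encodes the causal links just described for the Get it On! model and prints the same "if ... then ..." reading. It is an illustrative aid only, written by us in plain Python with paraphrased labels; it is not a tool from the Easy Evaluation materials.

```python
# Illustrative sketch (ours, not an Easy Evaluation tool): the Get it On!
# outcome chain expressed as causal links, with its "if ... then ..." reading.
links = [
    ("a high quality social marketing program is developed and implemented",
     "the target audiences will understand the key messages"),
    ("a high quality social marketing program is developed and implemented",
     "the target audiences will engage with the program"),
    ("the target audiences understand and engage with the program",
     "a positive condom culture is maintained and developed"),
    ("a positive condom culture is maintained and developed",
     "there will be increased condom use for anal sex"),
    ("there is increased condom use for anal sex",
     "the incidence of HIV will be reduced"),
]

def read_model(causal_links):
    """Print the 'if ... then ...' reading of each link in the chain."""
    for cause, effect in causal_links:
        print(f"If {cause}, then {effect}.")

read_model(links)
```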
It is imperative to recognize that logic models need to be considered within the context in which they are developed. In this case the campaign was developed based on evidence that promotional activities that engage with communities at all levels of development and implementation are more likely to be successful (Neville et al., 2014, 2016). The campaign was also delivered at a time when condom use in New Zealand for anal sex was relatively high compared to elsewhere and HIV diagnoses among gay and bisexual men were relatively low by international standards (Saxton et al., 2011), and before the use of biomedical HIV pre-exposure prophylaxis (PrEP) to prevent HIV infection (Adams et al., 2019, 2020).

In another example, when developing a logic model (Figure 3) the evaluators were tasked with examining the initial 12-week intensive full-time course for building core competencies for sonography trainees (Dickinson et al., 2016). The aim of this initial training was to ensure trainees were "work ready," thus reducing the burden on their supervisors when they returned to the workplace and continued their studies. Trainees were thus expected to be well-armed with essential skills, and well-prepared in sonography fundamentals for the remainder of their postgraduate course.

Logic models are not static entities. They represent thinking at one point in time and can be regenerated and redrawn as needed throughout the implementation of the project and/or the course of an evaluation as familiarity with the program develops (Patton, 2011). A robust logic model will be plausible and sensible and clearly communicate the causal processes that lead to the identified outcomes (Donaldson & Lipsey, 2011; Funnell & Rogers, 2011).

Evaluation Priorities and Questions
Using the Easy Evaluation framework focuses the program evaluation on the identified interventions and the expected outcomes. In other words, each box on the logic model can be evaluated. For each intervention depicted in the logic model, the key question is: What is the quality of the intervention? In turn, for each outcome, the key question is: To what extent has the outcome (of interest) been achieved? The benefit of this approach is identifying a specific focus for the program evaluation. The aim of the evaluation is to provide direct and succinct answers to these intervention and outcome questions.
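Because every model element maps onto one of these two question types, the questions can be generated mechanically. The small sketch below is our own minimal illustration using paraphrased Get it On! elements; it is not part of the framework itself.

```python
# Illustrative sketch (ours): deriving the two key Easy Evaluation question
# types from logic model elements. Labels are paraphrased from the Get it On! example.
model_elements = [
    ("intervention", "the Get it On! social marketing program"),
    ("outcome", "target audiences understand the Get it On! key message"),
    ("outcome", "target audiences engage with the program"),
    ("outcome", "a positive condom culture is maintained and developed"),
    ("outcome", "increased condom use for anal sex"),
]

def key_question(kind, label):
    """Return the evaluation question for an intervention or an outcome."""
    if kind == "intervention":
        return f"What is the quality of {label}?"
    return f"To what extent has the outcome '{label}' been achieved?"

for kind, label in model_elements:
    print(key_question(kind, label))
```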
When determining the evaluation priorities, decisions need to be made about which interventions and outcomes will be evaluated. In the cases of Get it On! and the sonography training it was possible to evaluate all components of the models. However, in many instances it is neither practical nor useful to evaluate all the interventions and outcomes on a logic model, and only the most important interventions and outcomes of the program need to be prioritized for evaluation. These prioritizing decisions must be made to meet stakeholder needs within the resources and budget available for the evaluation. In general terms, the short and medium term outcomes of a program are expected to be more fully achieved than long term outcomes, and it typically makes sense to prioritize them. In a theory-driven evaluation it is essential to understand whether the short and medium term outcomes are being achieved, as the logic modeling exercise will have identified these outcomes as theoretically necessary to produce the desired long term outcomes. If these short and medium term outcomes are not achieved to a sufficient degree, it would be unlikely the program would achieve its long term outcome(s).

A number of mechanisms are available to assist in the prioritization process to ensure evaluation decisions are informed by the interests and needs of the stakeholders (Dickinson et al., 2015). Typically, this will involve a formal facilitated discussion with stakeholders. A useful process for setting priorities is to have stakeholders vote on what they see as the most important elements of the logic model. One technique is to give stakeholders "sticky dots" to place on the logic model to represent their interests and priorities. After this process, discussion will be held between the evaluators and stakeholders to develop an agreement as to what will be evaluated. This type of process will enhance alignment between the stakeholders' views, interests and expectations of the evaluation process.

Evaluation Criteria and Performance Standards
Each prioritized intervention and outcome requires the establishment of criteria and performance standards. In developing these criteria, the aspects of an intervention or an outcome that are important within the evaluation are identified. These criteria and standards represent the values that will be used to determine program performance (Gullickson & Hannum, 2019; Peersman, 2014). The foregrounding of these values is something not undertaken in research, highlighting a key difference between research and evaluation practice.

At their broadest level, criteria represent concepts to be addressed in the evaluation (Peersman, 2014). To ensure their usefulness, the development of specific criteria is recommended to make the evaluation targeted and specific (Dickinson & Adams, 2017). In the Easy Evaluation framework, criteria reflect stakeholder views about the important dimensions of an intervention or an outcome. Most interventions and outcomes require the development of several criteria to ensure a well-rounded understanding of the intervention or outcome.

Once established, performance standards for these criteria need to be developed. A standard refers to levels of performance in relation to the quality of the intervention and the degree to which an outcome has been achieved. Rather than providing a singular standard, such as the minimum level of achievement acceptable, we advocate the use of rubrics setting out a range of performance standards. Rubrics can be considered a rating table or matrix that provides scaled levels of achievement. They set out an agreed understanding and provide a transparent basis for making evaluative judgements about aspects of a program (King et al., 2013). Through the process of developing rubrics, stakeholders make it clear what is valued about a program.

An evaluation rubric comprises two key components—the criteria to be rated and performance standards for the criteria. Depending on the scope of the project and stakeholder needs, there can be any number of categories of merit or levels of standards. Different labels can be employed to accompany the description of performance (e.g., Excellent, Very good, Good, Poor; or Highly effective, Minimally effective, Not effective). Alternatively, a numbered rating scale can be used to depict various levels of performance (e.g., 1–5).
Table 1. Rubric for Outcome: Target Audiences Understand Get it On! Key Message.

Excellent: Performance is clearly very strong or exemplary in relation to understanding the key message of Get it On!. Almost all (>90%) of men:
- identify the key message of Get it On!
- identify condom use as an important issue for gay and bisexual men
- report message as clear
- report message as instantly recognizable

Very good: Performance is generally strong in relation to understanding the key message of Get it On!. A vast majority (>75%) of men:
- identify the key message of Get it On!
- identify condom use as an important issue for gay and bisexual men
- report message as clear
- report message as instantly recognizable

Good: Performance is acceptable in relation to understanding the key message of Get it On!. Most (>65%) of men:
- identify the key message of Get it On!
- identify condom use as an important issue for gay and bisexual men
- report message as clear
- report message as instantly recognizable

Poor: Performance in relation to understanding the key message of Get it On! is unacceptably weak. Does not meet minimum expectations/requirements.
Table 2. Rubric for Intervention: Get it On! Reflects Best Practice in Social and Behavior Change Marketing.

Excellent: A clear example of very strong or exemplary performance in relation to best practice in social and behavioral change marketing. Any gaps or weaknesses are not significant and are managed effectively.
Very good: Strong performance in relation to best practice in social and behavioral change marketing. No significant gaps or weaknesses, and less significant gaps or weaknesses are mostly managed effectively.
Good: Acceptable or fair performance in relation to best practice in social and behavioral change marketing. Some gaps or weaknesses. Meets minimum expectations/requirements.
Poor: Unacceptably weak performance in relation to best practice in social and behavioral change marketing. Does not meet minimum expectations/requirements.
Six levels of standards should be the maximum, as extra precision is not typically gained from having more levels (Davidson, 2005).

In the Get it On! evaluation, a workshop was held with key stakeholders to develop criteria and standards. The discussion drew on stakeholders' expertise along with evidence and practice from other similar evaluations and the views and advice provided by topic area experts. Four criteria were developed for the outcome—Target audiences understand Get it On! key message:
- participants identify the key message of Get it On!
- participants identify condom use as an important issue for gay and bisexual men
- participants report message as clear
- participants report message as instantly recognizable

In addition, four levels of performance incorporating these criteria were identified (Table 1) (Adams & Neville, 2013; Adams et al., 2017). Developing four criteria meant the evaluation was able to take into account four different views or aspects to determine performance in relation to the outcome. Having several views or datapoints is much stronger than relying on a single indicator.

The Get it On! evaluation also incorporated a significant qualitative component exploring the planning and design of the program. To assess the quality of the intervention, evaluation sub-questions were developed. One of these was: How well does Get it On! reflect best practice in social and behavior change marketing? A rubric stating the performance standards for this sub-question was also developed collaboratively (Table 2).

Rubrics can vary in their level of detail and preciseness. The rubrics suggested as a starting point within the Easy Evaluation framework are reasonably prescriptive. Specific standards (e.g., Almost all (>90%) of men identify the key message of Get it On!; see Table 1) reduce ambiguity when determining performance. More holistic rubrics can be developed as the experience and confidence of the evaluator grows. Holistic rubrics require more interpretation by the evaluator and stakeholders to determine how well a program has performed. An example of a holistic rubric is a generic rubric developed by Davidson (2005) for use with qualitative data (see Table 3). This generic rubric can be customized for a range of evaluation projects.
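A prescriptive rubric of this kind can also be written down unambiguously as data. The sketch below, which is ours and purely illustrative, encodes the Table 1 criteria and percentage thresholds in Python. The rule that every criterion must clear a level's threshold is one plausible reading of the rubric wording, something that would be agreed with stakeholders in practice, rather than a rule stated by the authors.

```python
# Illustrative sketch (ours): the Table 1 rubric expressed as data. The four
# criteria and the >90% / >75% / >65% thresholds come from Table 1; how the
# criteria combine into one level is an assumption made for illustration.
criteria = [
    "identify the key message of Get it On!",
    "identify condom use as an important issue for gay and bisexual men",
    "report message as clear",
    "report message as instantly recognizable",
]

# Performance levels in descending order, with the minimum percentage of men
# who must meet each criterion for the level to apply.
levels = [
    ("Excellent", 90.0),  # almost all (>90%) of men
    ("Very good", 75.0),  # a vast majority (>75%) of men
    ("Good", 65.0),       # most (>65%) of men
]

def rate(percent_meeting_each_criterion):
    """Return the highest level whose threshold every criterion exceeds.

    Falls back to "Poor" (does not meet minimum expectations) when no
    threshold is cleared by all criteria.
    """
    weakest = min(percent_meeting_each_criterion)
    for label, threshold in levels:
        if weakest > threshold:
            return label
    return "Poor"

# Hypothetical percentages, one per criterion, for illustration only.
example = dict(zip(criteria, [95.0, 92.0, 80.0, 91.0]))
print(rate(example.values()))  # -> "Very good"
```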
Collect, Analyze and Interpret Data
Data collection should be driven by the needs of the evaluation (Bauman & Nutbeam, 2014). In the Easy Evaluation framework, the criteria and standards determine the areas for data collection.
Table 3. Generic Rubric for Qualitative Data (Adapted With Minor Variation From Davidson, 2005).

Excellent: Evidence of a strong positive impact: Very positive comments, with a substantial number that indicated a very strong impact; few if any neutral or negative comments.
Good: Evidence of a noticeable positive impact: A good number of positive comments (few neutral or negative), clearly showing that the program had made a noticeable positive impact.
Satisfactory: Evidence of some positive impact: A mix of positive and negative comments, skewed toward the positive; evidence pointing in the right direction but not to a very noticeable impact.
Marginal: Little or no impact either way: A real mix of comments; not a clear skew in either the positive or negative direction.
Poor: Evidence of some negative impact: A mix of positive and negative comments, skewed somewhat toward the negative; not enough evidence to call this a really noticeable negative impact.
Table 4. Summary of Evaluators' Assessment of Interview and Other Data for Intervention: Get it On! Reflects Best Practice in Social and Behavior Change Marketing.

Behavioral outcomes: The behavior of interest is clearly identified and the program is focused on influencing these. The program draws on pragmatic social marketing principles rather than being informed by behavioral theories. A social approach is applied and is relevant in HIV prevention, where the role of social networks in disease transmission is increasingly recognized as important. The Get it On! program clearly demonstrates an understanding of the need for social change to foster environments that encourage individual behavior change.
Target market identification: The target market is clearly identified and appropriate audience segments established.
Program objectives: The program does not have measurable and time-bound goals established for any of the behavioral outcomes of interest. The NZAF have suggested that establishing a goal for rates of condom use would be difficult and perhaps arbitrary. Nonetheless, as this behavior is crucial to the ultimate outcome of reducing HIV incidence, setting goals for the behavioral outcome of condom use should be explored further by the NZAF, Ministry of Health and other stakeholders. The establishment of goals for other outcomes on the logic model would also be desirable.
Tactical initiatives to achieve program objectives: The tactical initiatives appear appropriate for the audience segments and outcomes of interest.
For each area of data collection, specific methods and tools for gathering data need to be developed. Each criterion requires at least one data collection method. Where feasible, the use of qualitative and quantitative methods is recommended as more likely to support a comprehensive understanding of the intervention or outcome being examined.

As an example, the Get it On! evaluation utilized an online and paper-based survey to seek the views of gay and bisexual men about the program and their sexual behavior and practices. The survey was largely quantitative, but qualitative data were also collected using a series of open ended questions (Neville et al., 2016). For the criterion Knowledge of key message of Get it On!, survey participants were asked: "What does the Get it On! message mean to you? (Please select one option closest to your understanding of what it is promoting or associated with): Sports participation; Unsafe sex; Use of condoms for sex; I've seen it, but don't know what it means; I've never seen or heard of Get it On!; Other (please specify)."
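As a small illustration of how answers to this closed question become the percentage used by the corresponding criterion, the sketch below tallies a set of invented toy responses; the response data are ours and are not the study data.

```python
from collections import Counter

# Illustrative sketch (ours) with invented toy responses, not the study data:
# tallying answers to the closed survey question quoted above.
options = [
    "Sports participation",
    "Unsafe sex",
    "Use of condoms for sex",
    "I've seen it, but don't know what it means",
    "I've never seen or heard of Get it On!",
    "Other",
]

toy_responses = (["Use of condoms for sex"] * 47
                 + ["Unsafe sex"] * 2
                 + ["I've never seen or heard of Get it On!"] * 1)

counts = Counter(toy_responses)
total = len(toy_responses)
for option in options:
    share = 100 * counts[option] / total
    print(f"{option}: {share:.1f}%")
```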
Qualitative key informant interviews (along with a review of documents) were also used in the Get it On! evaluation. The data collected from the interviews were used to assess the evaluation sub-question: How well does Get it On! reflect best practice in social and behavior change marketing? (for a summary of the qualitative and document review findings see Table 4).

In general terms, the data collection methods and analysis used in program evaluation are the same as those used in a standard research project. The use of surveys, key informant interviews, and focus groups is common in program evaluation. Where possible, existing data already collected by the program or by others should be reviewed to establish whether it is suitable and relevant to use. The focus of all data collection centers on providing relevant data for the evaluation. After analysis, the data are used in the process of drawing evaluative conclusions.

Draw Evaluative Conclusions
In this phase the analyzed data (or the descriptive research "facts") are viewed through a process of evaluative reasoning so that evaluative conclusions can be developed. Drawing conclusions is the process by which the values of the project are made explicit in determining program performance. This centering of values is an important feature of program evaluation and sets it apart from research.

A key step in formulating evaluative conclusions is holding a sensemaking session or "data party." This is a participatory process to involve stakeholders and evaluators in interpreting findings and establishing common understandings (Patriotta & Brown, 2011). Integral to this process is the "mapping" of shared understandings of the data to the rubric to determine and agree the level of performance.
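A minimal sketch of this mapping step is shown below, using the percentages reported in Table 5 and the thresholds from Table 1. How the four criteria combine into a single rating is our assumption for illustration (here the weakest criterion caps the level); in the Easy Evaluation process the level is agreed with stakeholders at the sensemaking session, not computed mechanically.

```python
# Illustrative sketch (ours): mapping the summarized survey data (Table 5) to
# the rubric thresholds (Table 1). The overall rule (weakest criterion caps
# the rating) is an assumption for illustration, not the authors' procedure.
levels = [("Excellent", 90.0), ("Very good", 75.0), ("Good", 65.0)]

# Percentage of men meeting each criterion, as reported in Table 5.
observed = {
    "identify the key message of Get it On!": 94.6,
    "condom use is an important issue": 91.7,
    "key message is clear": 76.3,
    "key message is instantly recognizable": 79.8,
}

def level_for(percent):
    """Return the highest level whose threshold the percentage exceeds."""
    for label, threshold in levels:
        if percent > threshold:
            return label
    return "Poor"

for criterion, percent in observed.items():
    print(f"{criterion}: {percent}% -> {level_for(percent)}")

# Assumed overall reading: the outcome is rated at the level met by all criteria.
print("Overall (assumed combination rule):", level_for(min(observed.values())))
```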
Table 5. Summary of Data for Outcome: Target Audiences Understand Get it On! Key Message.

Knowledge of key message of Get it On!: All respondents were asked to identify from a range of options "what does the Get it On! message mean to you?" Nearly all men (94.6%) correctly reported the key message as "use of condoms for anal sex." A very small proportion (1.6%) of men reported they had never heard of Get it On!. Among the priority audience segments, reported knowledge of the correct key message of "use of condoms for anal sex" was similar (89.8–95.2%).

Condom use is an important issue for gay and bisexual men: All respondents were asked to state their level of agreement with the statement, "Using a condom for sex is important for gay and bisexual men." Most men (91.7%) reported they strongly agreed or agreed with the statement, 4.1% were neutral, and 3.4% disagreed or strongly disagreed. Across all but one of the audience segments, a reasonably similar proportion (88.8%–92.8%) of men strongly agreed or agreed condom use is an important issue for gay and bisexual men. The exception was men who had 21 or more partners within the last 6 months, where a statistically significant smaller proportion (75.7%) reported their agreement that condom use is important for gay and bisexual men.

Key message is clear: To assess respondents' views on whether the Get it On! message is clear, they were presented with the following negative statement and asked to state their level of agreement with it: "Get it On! is confusing." Around three quarters of men (76.3%) reported they strongly disagreed or disagreed with the statement, 12.4% were neutral, and 11.3% agreed or strongly agreed. Across all but one of the audience segments, a similar proportion (75.6%–77.5%) of men disagreed that the message Get it On! is confusing. The exception was men who had 21 or more partners within the last 6 months, where a significantly lower proportion (56.8%) reported disagreement that the message of Get it On! is confusing.

Key message is instantly recognizable: All respondents were asked to state their level of agreement with the statement "Get it On! is instantly recognizable." Over three quarters of men (79.8%) reported they strongly agreed or agreed with the statement, 13.9% were neutral, and 6.3% disagreed or strongly disagreed. Across audience segments a reasonably similar proportion (78.4%–84.8%) of men reported they strongly agreed or agreed the Get it On! message is instantly recognizable. The exceptions were men who had their first anal sex within the last 6 months (71.6%), where a statistically significant lower proportion reported agreement that the Get it On! message is instantly recognizable, and men who had 21 or more partners within the last 6 months (not significant).
In the Get it On! program evaluation, data for the outcome Target audiences understand Get it On! key message were collected via a survey and summarized (Table 5). These data were then mapped to the rubric (Table 1). An assessment was determined for each criterion:

Behavioral outcomes—Excellent
Target market identification—Excellent
Program objectives—Good
report at a meeting, various media (print, radio, Facebook, webpage), or an exhibition or display in a public place. Where possible, wider dissemination to academic and practice audiences should also be undertaken through journal articles (e.g., Adams et al., 2017; Wilkinson et al., 2014) and conference presentations.

Discussion
Health professionals are competent clinicians who deliver health services to communities. Integral to the successful delivery of health services is the knowledge, attributes and skills for evaluating their impact. Consequently, the ability to undertake program evaluation is an important addition to the skill set of all health professionals. While there has been an expectation for some time that health professionals can plan and implement evaluations of small scale projects (Stevenson et al., 2002), training and support for this group has been lacking.

In this article we have presented a framework to guide health professionals in conducting program evaluation. In doing so, we have highlighted that research and evaluation are not the same, and it is evaluative reasoning that sets these endeavors apart. We are not suggesting that following this framework will prepare health professionals to be "professional evaluators." The framework can, however, inform "non-evaluation professionals" about how to undertake useful evaluation activity (Gullickson et al., 2019) and contribute to building evaluation capacity and capability among health professionals (King & Ayoo, 2020). Small projects, including student theses or applying tools such as logic models, may be an appropriate way to initially utilize the framework. With increased confidence, larger projects may be possible, as well as more informed involvement with external evaluators when this is relevant.

The strength of the Easy Evaluation framework is that it offers health professionals a way to embed evaluative thinking into program evaluation through its processes. Ensuring an evaluation lens is applied sets program evaluation apart from research projects that are evaluation in name only, demonstrate no evidence of evaluative thinking, and have no robust systems in place to make value judgements about the merit, worth, and significance of programs. Using an evaluation-specific framing allows evaluators to move beyond just reporting data to telling a meaningful "story" about a program (Hauk & Kaser, 2020).

Some limitations should be noted. This framework is one way to approach program evaluation, but there is a plethora of alternative approaches (for a survey of approaches see Mertens & Wilson, 2019). However, Easy Evaluation has been successfully taught to and used by health professionals, and has proved suitable in many circumstances. Further, evaluation approaches using logic models have been critiqued on the grounds they are not always suitable for the evaluation of complex programs (Brocklehurst et al., 2019; Renger et al., 2019). While this criticism is valid in particular circumstances, the approach offered here is entirely suitable for the simpler projects in which health professionals starting out in program evaluation are most likely to be involved.

Conclusion
Providing a way for health professionals to undertake program evaluation will allow for greater understandings to be developed about the merit, worth, and significance of clinical and non-clinical health initiatives. In our view health professionals have valuable expertise which can enhance program evaluation activity. Easy Evaluation is a framework that will allow health professionals to conduct robust program evaluation.

Acknowledgments
We acknowledge colleagues (Dr Pauline Dickinson, Dr Lanuola Asiasiga, Dr Belinda Borell) who were involved in the development of the Easy Evaluation framework and those who led or contributed to the evaluation projects referenced in this article.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The development of Easy Evaluation and the teaching of it to the public health workforce is funded by the Ministry of Health, New Zealand. The views expressed here are those of the authors and not the Ministry of Health.

ORCID iD
Jeffery Adams https://orcid.org/0000-0003-3052-5249
Stephen Neville https://orcid.org/0000-0002-1699-6143

Notes
1. Copies of the framework are available from the first author.
2. For simplicity, in this article the term "program" is used generically to describe the evaluand (i.e., the project, program, strategy, initiative, or policy to be evaluated).

References
Adams, J., Coquilla, R., Montayre, J., Manalastas, E. J., & Neville, S. (2020). Views about HIV and sexual health among gay and bisexual Filipino men living in New Zealand. International Journal of Health Promotion and Education. https://doi.org/10.1080/14635240.2020.1766993
Adams, J., Coquilla, R., Montayre, J., & Neville, S. (2019). Knowledge of HIV pre-exposure prophylaxis among immigrant Asian gay men living in New Zealand. Journal of Primary Health Care, 11(4), 351–358. https://doi.org/10.1071/HC19076
Adams, J., & Dickinson, P. (2010). Evaluation training to build capability in the community and public health workforce. American Journal of Evaluation, 33(3), 421–433. https://doi.org/10.1177/1098214010366586
Adams, J., & Neville, S. (2009). Men who have sex with men account for nonuse of condoms. Qualitative Health Research, 19(12), 1669–1677. https://doi.org/10.1177/1049732309353046
Adams, J., & Neville, S. (2013). An evaluation of Get it On! SHORE & Whariki Research Centre, Massey University.
Adams, J., Neville, S., & Dickinson, P. (2013). Evaluation of bro online: An internet-based HIV prevention initiative for gay and bisexual men. International Journal of Health Promotion and Education, 51(5), 239–247. https://doi.org/10.1080/14635240.2012.702502
Azzam, T., Evergreen, S., Germuth, A. A., & Kistler, S. J. (2013). Data visualization and evaluation. New Directions for Evaluation, 2013(139), 7–32. https://doi.org/10.1002/ev.20065
Bamberger, M., & Mabry, L. (2020). RealWorld evaluation: Working under budget, time, data, and political constraints (3rd ed.). Sage.
Bauman, A., & Nutbeam, D. (2014). Evaluation in a nutshell: A guide to the evaluation of health promotion programs (2nd ed.). McGraw Hill Education.
Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. Sage.
Brocklehurst, P. R., Baker, S. R., Listl, S., Peres, M. A., Tsakos, G., & Rycroft-Malone, J. (2019). How should we evaluate and use evidence to improve population oral health? Dental Clinics, 63(1), 145–156. https://doi.org/10.1016/j.cden.2018.08.009
Curry, L. A., Nembhard, I. M., & Bradley, E. H. (2009). Qualitative and mixed methods provide unique contributions to outcomes research. Circulation, 119(10), 1442–1452. https://doi.org/10.1161/CIRCULATIONAHA.107.742775
Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Sage.
Davidson, E. J. (2007). Unlearning some of our social scientist habits. Journal of MultiDisciplinary Evaluation, 4(8), iii–vi.
Davidson, E. J. (2013). Actionable evaluation basics: Getting succinct answers to the most important questions. Real Evaluation.
Davidson, E. J. (2014). Evaluative reasoning. Methodological briefs: Impact evaluation 4. UNICEF Office of Research.
DePoy, E., & Gitlin, L. N. (2016). Introduction to research: Understanding and applying multiple strategies (5th ed.). Mosby.
Dickinson, P., & Adams, J. (2012). Building evaluation capability in the public health workforce: Are evaluation training workshops effective and what else is needed? Evaluation Journal of Australasia, 12(2), 28–39. https://doi.org/10.1177/1035719X1201200204
Dickinson, P., & Adams, J. (2017). Values in evaluation—The use of rubrics. Evaluation and Program Planning, 65, 113–116. https://doi.org/10.1016/j.evalprogplan.2017.07.005
Dickinson, P., Adams, J., Asiasiga, L., & Borell, B. (2015). Easy Evaluation: A practical approach to programme evaluation. SHORE & Whariki Research Centre, Massey University.
Dickinson, P., Adams, J., & Neville, S. (2016). The final evaluation of the Northern Regional Accelerated Sonography Training pilot 2014 to 2016. SHORE & Whariki Research Centre, Massey University.
Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Lawrence Erlbaum Associates.
Donaldson, S. I., & Lipsey, M. W. (2011). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In I. F. Shaw, J. C. Greene, & M. M. Mark (Eds.), The SAGE handbook of evaluation (pp. 57–75). Sage.
Evergreen, S. (2016). Effective data visualization: The right chart for the right data (2nd ed.). Sage.
Frye, A. W., & Hemmer, P. A. (2012). Program evaluation models and related theories: AMEE Guide No. 67. Medical Teacher, 34(5), e288–e299. https://doi.org/10.3109/0142159X.2012.668637
Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: Effective use of theories of change and logic models. Jossey-Bass.
Gerrish, K. (2015). Research and nursing development. In K. Gerrish & J. Lathlean (Eds.), The research process in nursing (pp. 3–13). John Wiley & Sons.
Gerrish, K., & Lathlean, J. (2015). The research process in nursing. John Wiley & Sons. http://ebookcentral.proquest.com/lib/massey/detail.action?docID=1936761
Gullickson, A. M., & Hannum, K. M. (2019). Making values explicit in evaluation practice. Evaluation Journal of Australasia, 19(4), 162–178. https://doi.org/10.1177/1035719x19893892
Gullickson, A. M., King, J. A., LaVelle, J. M., & Clinton, J. M. (2019). The current state of evaluator education: A situation analysis and call to action. Evaluation and Program Planning, 75, 20–30. https://doi.org/10.1016/j.evalprogplan.2019.02.012
Hauk, S., & Kaser, J. (2020). A search to capture and report on feasibility of implementation. American Journal of Evaluation, 41(1), 145–155. https://doi.org/10.1177/1098214019878784
Hawe, P. (2015). Lessons from complex interventions to improve health. Annual Review of Public Health, 36(1), 307–323. https://doi.org/10.1146/annurev-publhealth-031912-114421
Henderson, K., Worth, H., Aggleton, P., & Kippax, S. (2009). Enhancing HIV prevention requires addressing the complex relationship between prevention and treatment. Global Public Health: An International Journal for Research, Policy and Practice, 4(2), 117–130. http://www.informaworld.com/10.1080/17441690802191329
Hutchinson, K. (2017). A short primer on innovative evaluation reporting. Kylie Hutchinson.
Kemppainen, V., Tossavainen, K., & Turunen, H. (2012). Nurses' roles in health promotion practice: An integrative review. Health Promotion International, 28(4), 490–501. https://doi.org/10.1093/heapro/das034
King, J. A., & Ayoo, S. (2020). What do we know about evaluator education? A review of peer-reviewed publications (1978–2018). Evaluation and Program Planning, 79. https://doi.org/10.1016/j.evalprogplan.2020.101785
King, J., McKegg, K., Oakden, J., & Wehipeihana, N. (2013). Evaluative rubrics: A method for surfacing values and improving the credibility of evaluation. Journal of MultiDisciplinary Evaluation, 9(21), 11–20.
Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. Canadian Journal of Program Evaluation, 18(2), 1–31.
McKechnie, M. L., Bavinton, B. R., & Zablotska, I. B. (2013). Understanding of norms regarding sexual practices among gay men: Literature review. AIDS and Behavior, 17(4), 1245–1254. https://doi.org/10.1007/s10461-012-0309-8
McKenzie-Mohr, D. (2000). Fostering sustainable behavior through community-based social marketing. American Psychologist, 55(5), 531–537. https://doi.org/10.1037/0003-066X.55.5.531
Mertens, D. M., & Wilson, A. T. (2019). Program evaluation theory and practice: A comprehensive guide (2nd ed.). Guilford Press.
Mills, T., Lawton, R., & Sheard, L. (2019). Advancing complexity science in healthcare research: The logic of logic models. BMC Medical Research Methodology, 19(1), 55. https://doi.org/10.1186/s12874-019-0701-4
Moule, P., Aveyard, H., & Goodman, M. (2017). Nursing research: An introduction. Sage.
Neff, J., Knight, K. R., Satterwhite, S., Nelson, N., Matthews, J., & Holmes, S. M. (2017). Teaching structure: A qualitative evaluation of a structural competency training for resident physicians. Journal of General Internal Medicine, 32(4), 430–433. https://doi.org/10.1007/s11606-016-3924-7
Neville, S., & Adams, J. (2009). Condom use in men who have sex with men: A literature review. Contemporary Nurse, 33(2), 130–139. https://doi.org/10.5172/conu.2009.33.2.130
Neville, S., Adams, J., & Holdershaw, J. (2014). Social marketing campaigns that promote condom use among MSM: A literature review. Nursing Praxis in New Zealand, 30(1), 5–16.
Neville, S., Adams, J., Moorley, C., & Jackson, D. (2016). The condom imperative in anal sex—One size may not fit all: A qualitative descriptive study of men who have sex with men. Journal of Clinical Nursing, 25(23–24), 3589–3596. https://doi.org/10.1111/jocn.13507
Neville, S., Adams, J., Napier, S., & Shannon, K. (2018). Age-friendly community evaluation: Report prepared for the Office for Seniors, Ministry of Social Development. AUT Centre for Active Ageing, Auckland University of Technology.
Oosthuizen, C., & Louw, J. (2013). Developing program theory for purveyor programs. Implementation Science, 8(1), 23. https://doi.org/10.1186/1748-5908-8-23
Patriotta, G., & Brown, A. D. (2011). Sensemaking, metaphors and performance evaluation. Scandinavian Journal of Management, 27(1), 34–43. https://doi.org/10.1016/j.scaman.2010.12.002
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Patton, M. Q. (2014). Evaluation flash cards: Embedding evaluative thinking in organizational culture. Otto Bremer Trust.
Patton, M. Q. (2018). Evaluation science. American Journal of Evaluation, 39(2), 183–200. https://doi.org/10.1177/1098214018763121
Peersman, G. (2014). Evaluative criteria: Methodological briefs: Impact evaluation 3. UNICEF Office of Research.
Renger, R., Atkinson, L., Renger, J., Renger, J., & Hart, G. (2019). The connection between logic models and systems thinking concepts. Evaluation Journal of Australasia, 19(2), 79–87. https://doi.org/10.1177/1035719x19853660
Saxton, P., Dickson, N., McAllister, S., Sharples, K., & Hughes, A. (2011). Increase in HIV diagnoses among men who have sex with men in New Zealand from a stable low period. Sexual Health, 8(3), 311–318. https://doi.org/10.1071/SH10087
Scriven, M. (1991). Evaluation thesaurus. Sage.
Seibt, A. C., Ross, M. W., Freeman, A., Krepcho, M., Hedrich, A., McAlister, A., & Fernández-Esquer, M. E. (1995). Relationship between safe sex and acculturation into the gay subculture. AIDS Care, 7(suppl 1), 85–88. https://doi.org/10.1080/09540129550126876
Shernoff, M. (2005). Without condoms: Unprotected sex, gay men and barebacking. Routledge.
Stevenson, J. F., Florin, P., Mills, D. S., & Andrade, M. (2002). Building evaluation capacity in human service organizations: A case study. Evaluation and Program Planning, 25(3), 233–243. https://doi.org/10.1016/S0149-7189(02)00018-6
Sullivan, P. S., Carballo-Diéguez, A., Coates, T., Goodreau, S. M., McGowan, I., Sanders, E. J., Smith, A., Goswami, P., & Sanchez, J. (2012). Successes and challenges of HIV prevention in men who have sex with men. The Lancet, 380(9839), 388–399. https://doi.org/10.1016/S0140-6736(12)60955-6
Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. New Directions for Evaluation, 2008(120), 55–69. https://doi.org/10.1002/ev.276
Trombetta, C., Capdeville, M., Patel, P. A., Feinman, J. W., Al-Ghofaily, L., Gordon, E. K., & Augoustides, J. G. T. (2020). The program evaluation committee in the adult cardiothoracic anesthesiology fellowship—Harnessing opportunities for program improvement. Journal of Cardiothoracic and Vascular Anesthesia, 34(3), 797–804. https://doi.org/10.1053/j.jvca.2019.08.011
Wilkinson, J., Carryer, J., & Adams, J. (2014). Evaluation of a diabetes nurse specialist prescribing project. Journal of Clinical Nursing, 23(15–16), 2355–2366. https://doi.org/10.1111/jocn.12517
Wilkinson, J., Carryer, J., Adams, J., & Chaning-Pearce, S. (2011). Evaluation of the diabetes nurse specialist prescribing project. School of Health and Social Services, Massey University.