Green 1974
Downloaded from heb.sagepub.com at The University of Melbourne Libraries on June 4, 2016
[Figure 1: Diagram relating the components of a health program to predisposing factors (knowledge, attitudes, values, perceptions), enabling factors, and reinforcing factors; to behavioral dimensions (e.g., earliness); and to outcome indicators — behavioral indicators (utilization, preventive actions), vital indicators (morbidity, mortality, fertility, disability, incidence), and social indicators (illegitimacy, population, welfare, unemployment, absenteeism, alienation, hostility, discrimination, votes). Program elements shown include referrals, staff development, personnel, and feedback.]
intensity, duration, frequency, time of onset, and other diagnostic dimensions.

Measures available or capable of being produced in relation to behavioral problems include utilization and compliance indices from medical records, clinic or hospital billing records, direct observation, and surveys in which people are asked about their own behavior or the behavior of others whom they have observed. Dimensions of behavioral measurement that are relevant to health include earliness, frequency, range, persistence, and quality.
Each of the enabling, predisposing, and reinforcing factors contributing to the behavioral problem has various measurement possibilities for comparison purposes. Measures of availability and accessibility, for example, may be obtained through community agency surveys, inventories of available resources, geographical analyses of service locations, ratios of physicians and other health personnel to population, and the kinds of compilations of data on service facilities published routinely by planning agencies and professional associations. Data on predisposing factors and reinforcing factors are collected most efficiently through sample surveys, ideally with standardized instruments to maximize the comparison potential. Another useful measure in relation to reinforcing factors is participant observation.
When the total program is the object of interest for evaluative purposes, measures for comparison usually consist of a combination of the foregoing measures. In addition to measuring achievements, program evaluation may also include measures of the planning process and the structure of the program. Criteria for comparing the appropriateness of various steps in the planning process are available from educational and administrative theory. Each of these criteria can be rated or measured on a scale according to the degree to which the program planners adhered to that principle of planning. Similar criteria and rating scales can be suggested for the problem diagnosis stage, the preparation stage, the implementation stage, and the evaluation stage of program development.
The best all-purpose approach to measures of comparison for programs as a whole is the experimental approach or, more commonly, the social quasi-experiment, in which the program is directed at a sub-population and the results compared with those in another sub-population that does not receive the program. This method essentially involves comparison of the program with a situation in which no program is conducted. Variations on this approach allow for comparison with situations in which modified versions of the program are conducted. If the recipients can be randomly assigned to different program or nonprogram treatments, a true experimental design is possible.5
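The random assignment just described can be sketched in a few lines. The following is purely illustrative (the function and recipient names are mine, not the paper's): recipients are shuffled and dealt round-robin into program, modified-program, and no-program arms of equal size.

```python
import random

def assign_treatments(recipients, arms):
    """Shuffle recipients, then deal them round-robin into equal-sized arms."""
    pool = list(recipients)
    random.shuffle(pool)
    return {person: arms[i % len(arms)] for i, person in enumerate(pool)}

groups = assign_treatments(
    ["r1", "r2", "r3", "r4", "r5", "r6"],
    ("full_program", "modified_program", "no_program"),
)
# Each arm receives two of the six recipients, chosen at random.
```

Comparing the modified-program arm against the full-program and no-program arms is the variation on the basic design that the paragraph above describes.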
As a general principle of evaluation of specific educational techniques and methods, I would prefer that the total program be established and operational before the specific technique or method that is the object of interest is evaluated. This would be done then by withdrawing that element from selected (randomly identified) subgroups within the population receiving the total program. This approach is fundamentally at variance with the usual approach to evaluating health education techniques and methods in that it treats them within the total context of the program rather than by setting up an artificial or partial program that revolves around the specific method or technique. Thus, when a pamphlet is the object of interest, it seems to me inappropriate to evaluate the effectiveness or utility of a pamphlet by setting up a program that consists of virtually nothing other than the pamphlet. The pamphlet can more appropriately be evaluated within the context of a program in which it might be used. That evaluation would require introducing the pamphlet into a total program and then selectively withholding the use of the pamphlet from subgroups within the program.
Social psychologists usually evaluate specific motivational appeals as the object of interest (for example, comparing fear arousal with prestige appeals) in a laboratory or other highly controlled situation in which the message is the total program. I doubt the utility of such evaluations and suspect that they have limited generalizability to the community settings in which the appeals or messages are to be used in the context of a range of activities constituting a program.
As a consequence of the inductive, segmental approach to evaluating health education techniques and methods, the results are more negative than they should be. When a social psychologist attempts to change a health belief in a laboratory situation using nothing more than a film or a written message or other communication device and finds on follow-up that the belief has not changed, he then calls into question the utility of health education in general. The readers of such reports frequently cite these studies as evidence of the limited efficacy of health education programs. One could argue that such studies in no way call into question the utility of health education because the objects of interest, i.e., the independent variables in such studies, are not health education. They are nothing more than specific media, messages, or appeals that may be used as aids in health education programs. Thus, the error is both methodological and conceptual, and the fallacy is one of induction.
The misplaced precision of sophisticated evaluative studies done on trivial health education programs yields reviews of the literature with a negative picture of health education effectiveness. Health educators should resist the pressure to evaluate before they have sufficiently
developed their programs to make them worthy of evaluation and to make the results more predictably positive. There is no point, it seems to me, in making large expenditures on the evaluation of half-baked programs unless such programs are typical of all programs in which large amounts of money are being expended and the purpose of the evaluation is to bring an end to such expenditures.
Standards of Acceptability: Administrative Problems
The objectives of health education programs are stated in terms of "desired results." By what standards does a community or an administrator determine desirability? In reviewing the literature, I have tried to classify studies according to the implicit standard of acceptability that is used when comparisons are made, as they inevitably must be in order for the study in question to be truly classified as an evaluative study.
1. Historical standards of acceptability, implicit or explicit, are applied when the comparison is between different points in time for the individual, the population, the problem, the program, or the technique that represents the object of interest. "How does our program this month compare with last month?" "How is the program progressing compared with last year?" It is the standard employed when one plots trend charts with units of time on the abscissa ("X" axis). The "Y" axis may be the number of persons visiting the clinic, percentage of positive responses on a survey instrument, or other dependent variables between different points in time. Deniston and Rosenstock have recently examined the limitations of evaluations that depend on time comparisons.
2. Normative standards are implied when the evaluation asks the question, "How does this program compare with others?" "How does our program compare with the national average?" Regional comparisons are used quite frequently as the norm because comparing programs within the same region equalizes the influence of many extraneous variables. Thus, it is more reasonable to compare patient education programs in similar types of hospitals than to compare similar programs in entirely different settings. The norm selected for comparison can make the difference between judging a program a success or failure, correctly or incorrectly. This is true, of course, for all of the standards of acceptability. The ideal comparison group in applying normative standards is a true control group in an experimental design.
3. Absolute standards are employed when program administrators or policy boards set 100 percent solution of a problem as a goal. The object of interest may have to be compared against that standard of acceptability regardless of how unrealistic it may be, or how unachievable it is.
4. Theoretical standards, based on theory and previous research, compare achievements with what we would expect to achieve if everything went right in the program. Theoretical standards are sometimes referred to as professional or scientific standards. They are usually higher than normative or historical standards but lower than absolute standards of acceptability.
5. Negotiated standards are usually somewhere between the corresponding theoretical standard and the absolute standard for the same object of interest. If a consumer group or a political action group has set an absolute standard, an administrator has set an arbitrary standard, and a consultant or funding agency has set a theoretical standard for a program, there may follow a process of compromise in which an intermediate standard is negotiated. This then becomes the goal for the object of interest and is a standard of acceptability against which the object is evaluated.
6. Arbitrary standards are the opposite of theoretical standards in that they are usually based on a complete lack of information rather than a thorough analysis of information. They may be set by an administrator in the absence of consultation or previous experience with the problem, or for purposes of providing the staff with a target and setting quotas for staff performance, even though it is not known whether the staff can be expected to achieve them.
The state of the art in health education, it appears, leaves us most
frequently with historical and normative standards and, in the case of
new program areas, often with arbitrary and absolute standards. As we
accumulate a body of literature, we should begin to formulate
theoretical standards that can be employed in the evaluation of health
programs. Such a task, then, is one of the major purposes of this
presentation. Having defined evaluation and discussed the elements of
health education planning and programming in relation to this defini-
tion, I will now attempt to pull together some specific examples from
the literature that might lead us to the formulation of some theoretical
standards for comparing one kind of health education program against
another and against alternative strategies for solving various health
problems.
[Figure 2: Educational inputs (cost), behavioral outcomes, and medical or administrative benefits.]
the fears, loyalties, desires, or aspirations of the patient, linking these to the goals of the program, or it may take the form simply of reducing the boredom of sitting in a waiting room or convalescing in a hospital bed. Whatever the form, the major benefit to be attributed to such efforts would be indirect effects on increasing patient knowledge, attitudes, beliefs, and satisfaction. In this sense the effects of arousing interest are supportive to informational inputs.
Communication with relatives or significant visitors is another low-
cost educational input that should be expected to yield substantial
increases in social support for the patient’s compliance with medical
regimens.
Outreach programs, on the other hand, cost considerably more, but can also have more extensive impact through primary prevention, reduction of delay in diagnosis and treatment (secondary prevention), and the recruitment of patients into preventive care channels (family planning clinics, immunization and well-baby clinics, dental clinics, screening programs, etc.). An example of cost-benefit analysis in relation to a health education outreach program is the recent publication of the Multiphasic Checkup Evaluation Study of the Kaiser-Permanente Health Plan, in which an outreach effort costing an average of $4 per year per man (ages 45-54) contacted and urged to obtain an annual health examination resulted in a net economic benefit of $822 per man over a seven-year period.10
In-service training, as a specific educational component of staff development, has widely varying costs. But the emphasis placed on this component by industry suggests that it has demonstrated its cost-effectiveness. In Figure 2 the benefits of training are identified in relation to the patient, but they can be equally related to administrative benefits derived from other changes in staff behavior. For example, staff can be trained to perform specific tasks more efficiently (saving time) and less wastefully (saving material). In a patient education program, in-service training is aimed at improving the ability of staff to communicate with patients more effectively, to organize the learning experiences of patients more effectively, and to meet the subjective needs of patients to their greater satisfaction. The products of these objectives should be the same medical and economic benefits listed for other educational inputs in Figure 2.
Community organization has as its immediate objective the improvement of the "enabling factors" outlined in Figure 1. This is accomplished through coordination of resources between agencies and institutions, the improvement of referral mechanisms, the development of new services or the relocation of old services to make them more accessible, and the involvement of community groups in planning or
developing their own resources. These community organization goals can be achieved at moderate to high costs, relative to other components of health programs, but the medical and administrative savings should also be high if the expected benefits shown in Figure 2 are realized.
Mass communications are usually expensive relative to other program components. They are inexpensive when calculated on a per capita basis, but the denominator in a per capita calculation of mass media costs gets smaller when defined in terms of behavioral outcomes. A recent demonstration-evaluation by Richard Udry and associates computed the per capita cost of patients recruited to family planning clinics by a mass media campaign to range from $75 to $5,000 per new patient.
Follow-up contacts with former patients represent another form of outreach activities in many programs. Costs to the program can be reduced by having the patient return for the follow-up visits at the hospital or clinic, but this is really a "transfer payment" rather than a cost saving. If health education personnel visit former patients at their homes at critical intervals following the prescription of a medical regimen, they can substantially influence the adherence by the patient to the medical regimen. Long-term medications such as penicillin prophylaxis for rheumatic heart disease, oral contraceptives for birth control, special diets, insulin for diabetes, etc., are known to have maximum "drop-out" times following prescription. In the case of oral contraceptives, for example, the point of maximum discontinuation is one month after the first pill is taken. The woman usually has experienced during the first month the worst side-effects to be expected from the medication and frequently decides not to start the second month because she expects these side effects to continue, is unduly frightened by them, and has not yet established a habit pattern in taking the pills. A home visit at this moment is far more effective in reinforcing continuation than at any other time. Optimal follow-up points can be identified similarly for other medical regimens in order to maximize cost-effectiveness for this component of health education. Cost-benefits can be expected to improve with increases in cost-effectiveness.
One must insist on a very limited interpretation of Figure 2. The partitioning of educational inputs is not intended to suggest that any of these should be expected by themselves, in isolation from other inputs, to achieve the outcomes listed on the right. I have already suggested that they should first be evaluated for their cost-effectiveness by establishing programs containing all or most of the elements, as appropriate, and then selectively withholding specific educational inputs from random samples of patients to measure their contribution to the behavioral outcomes in the middle column.
A COST-BENEFIT APPROACH TO
THE RESEARCH LITERATURE
Another kind of policy utility that can be sought in such an analysis is the conditional probability of success for health education under specific circumstances which are subject to administrative or legislative change. By analyzing the differential impact of health education under different organizational or financial conditions, for example, it should be possible to draw policy implications for the kinds of administrative arrangements which increase or decrease the probability of success with health education.
The educational strategies employed in each study reviewed must be examined for their cost elements, and their outcomes translated into units of medical or administrative savings. These could then be extrapolated and standardized to current costs and alternative financing mechanisms in order to derive estimates (ranges) of current cost-benefit ratios for different types of policy decisions related to preventive and primary health care.
There has been no comprehensive review of research related to health education practice since the six-volume series published in Health Education Monographs by Marjorie Young covering the literature from 1961 through 1966. There was a minimal effort in her review to relate the outcomes to savings in primary health care. The trend in research related to health education during the 1960's, however, has been increasingly directed toward behavioral change in the utilization of primary health care services and in compliance with medical regimens.
The late Secretary General of the International Union for Health Education, Louis-Paul Aujoulat, took the position that it is not possible to single out the specific contribution of education in health programs, and he argued against efforts to evaluate the cost-effectiveness of health education. His arguments, in fact, would lead us to abandon any attempt to evaluate health education. It would seem premature to accept either his premises or his conclusions until we have gone through the steps outlined above and have found no evidence of consistently positive results for some series of reasonable health education programs. It would be equally wrong to assume that health education does have economic pay-offs without the same careful analysis. As another European has stated, "Nothing is more damaging for a case than for it to be argued on specious or ill-conceived grounds."
In summary, the first step should be to organize, classify, and
inventory the past studies according to the educational strategy
employed (independent variables), the target population (subjects), the
behavioral objectives (dependent variables), the design, and the major
results. Major weaknesses and strengths of each study in terms of
internal validity need to be noted at this stage. The second procedure
should be to organize the studies according to selected controlling variables such as disease or problem category, socioeconomic status, type of institutional or private practice, or community setting, and other policy considerations. The studies could then be compared to determine their external validity in terms of the consistency with which the results obtain across studies for specific educational inputs and behavioral outcomes under similar conditions (controlling variables).

The steps, then, in establishing policy utility would be (1) to obtain from the literature a range of probability estimates for each class of health educational strategies in terms of their success in achieving specific improvements in behavioral indices related to primary health care; (2) to generate cost estimates for the relatively successful educational strategies at current prices for health education manpower and material; (3) to generate benefit estimates for the projected changes in costs averted and health care indices; (4) to examine conditional probabilities when the variance in step one is large, and to repeat steps two and three for major classes of conditional probabilities that would be subject to administrative control; and (5) to examine the reasons for failure where the probabilities in step one are uniformly low and to suggest other preventive measures, environmental controls, economic incentives, reorganization, or legislative changes as indicated by the inferred reasons for failure.
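Step one of this sequence amounts to pooling reviewed studies into a range of success probabilities per class of educational strategy. A minimal sketch, in which the strategy names and success figures are invented purely for illustration:

```python
from collections import defaultdict

# (strategy class, proportion of subjects meeting the behavioral objective)
# -- both columns hypothetical, standing in for coded results of reviewed studies.
studies = [
    ("home_visits", 0.40), ("home_visits", 0.55),
    ("mass_media", 0.05), ("mass_media", 0.12),
]

by_strategy = defaultdict(list)
for strategy, success in studies:
    by_strategy[strategy].append(success)

# Range (min, max) of probability estimates per strategy class: step one.
estimates = {s: (min(v), max(v)) for s, v in by_strategy.items()}
print(estimates)  # {'home_visits': (0.4, 0.55), 'mass_media': (0.05, 0.12)}
```

A wide range for a given class would then trigger step four, partitioning the same records by the controlling variables listed above.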
SOME EXAMPLES
In further refutation of Dr. Aujoulat, I would offer, as evidence that cost-benefit analyses of health education are possible, some examples of studies undertaken at Johns Hopkins in the past few years.
The Point of Diminishing Returns
In a collaborative study with the University of Maryland, we found that home visits by nutrition educators to rural poor homemakers, when limited to nutrition education, did not continue to yield substantial new increments in knowledge, attitudes, or behavioral changes after the first year. We designed our study for longitudinal (historical) and normative comparisons and concluded, at least for this kind of population with this kind of program, that it is economically better over a three-year period to spread a limited number of health educational aides over 300 families with 100 new families added each year than to spend all three years with 100 families. The importance of this kind of study is best illustrated by the national program to which this particular study was generalized, the Expanded Food and Nutrition Education Program. This program served more than 600,000 families (2.9 million people) nationally in its first two years. Based on an average annual salary of $6,000 per nutrition education aide, 40 home visits per month by each
aide, with each family visited once per month, and assuming a constant rate of visits over time for a given family, we estimated from our data that the program could serve three times as many families over a period of three years at a savings of approximately $300 per family. Thus, in the first two years of the national program, 1,200,000 families rather than 600,000 families presumably could have received the same educational impact by limiting the home visits to one year rather than continuing them for two years.15 Health education as a professional discipline needs studies addressed in this way to the "point of diminishing returns" in educational impact.
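The $300-per-family figure can be reconstructed from the assumptions just stated. The sketch below is my reading of that arithmetic, not the study's published computation; the variable names are mine.

```python
# Figures from the text: $6,000 annual salary per aide, 40 home visits per
# month per aide, each family visited once per month.
ANNUAL_SALARY = 6000
VISITS_PER_MONTH = 40

families_per_aide = VISITS_PER_MONTH                      # 40 families served at a time
cost_per_family_year = ANNUAL_SALARY / families_per_aide  # $150 per family per year

three_year_plan = 3 * cost_per_family_year  # $450 per family visited for three years
one_year_plan = 1 * cost_per_family_year    # $150 per family visited for one year
saving_per_family = three_year_plan - one_year_plan
print(saving_per_family)  # 300.0, i.e., approximately $300 per family
```

On this reading, the same aide budget covers 300 families for one year each instead of 100 families for three years, which is the trade the study recommends.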
Hypertension Studies
Another series of studies has been developed at Johns Hopkins to assess the ability of physicians and other personnel to influence the compliance of hypertensive patients with their medical regimens. The arithmetic in extrapolating one kind of cost-benefit assessment for a community health education program for this disease is rather straightforward with a limited number of assumptions. Considering that the hypertensive is a good candidate for complications, and noting that studies have shown that such complications can be as high as 50 percent of those with untreated hypertension, the cost of these complications begins to assume major proportions. It is estimated, for example, that 11,288 people in Alexandria, Virginia may have hypertension. Approximately 5,000 in that number may be assumed to have extremely high blood pressure, and at least 50 percent of those may be expected to have complications that will require hospitalization. If there were 200 strokes and 50 coronary infarctions in this group, at current hospital rates of $100 per day and 25-day average hospitalization for these conditions, the projected cost for untreated hypertension in Alexandria is $625,000 for the acute care alone. This does not include the cost of rehabilitation, which may require years in many of the stroke cases.16
It is estimated that the total cost of the program, with a brief but intensive, city-wide screening program with volunteers and nursing students manning street-corner blood-pressure checking stations, will be $10,000. If we assume that only half of the 5,000 high-risk cases are identified, and only half of those are successfully treated, and only half of those remain on their medication and maintain lowered blood pressure levels, we should expect a 12.5 percent reduction in the projected $625,000 cost for acute care of future strokes and coronary infarctions. The cost-benefit ratio for the health education program should thus be estimated at $10,000:$78,125, or $7.81 saved for every dollar invested in the program.
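The chain of assumptions can be checked step by step. The sketch below simply reproduces the figures given in the text; only the variable names are mine.

```python
# Projected acute-care cost of untreated hypertension in Alexandria.
strokes = 200
coronary_infarctions = 50
hospital_rate_per_day = 100   # dollars
average_stay_days = 25

acute_care_cost = (strokes + coronary_infarctions) * hospital_rate_per_day * average_stay_days
# 250 hospitalizations x $2,500 each = $625,000

# Three successive "only half" assumptions: identified, treated, maintained.
fraction_averted = 0.5 ** 3   # 12.5 percent
benefit = acute_care_cost * fraction_averted  # $78,125 in averted acute care
program_cost = 10000
print(benefit / program_cost)  # 7.8125, i.e., $7.81 saved per program dollar
```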
These computations, of course, are largely a priori, without the adjustments to be discussed later, and are presented primarily for
was no improvement in blood pressure control by patients of the control group of physicians.
Asthma Study
In another experimental study, asthmatic patients from the East Baltimore inner city area who utilized the emergency room of the Johns Hopkins Hospital were randomly assigned to experimental and control groups. Patients assigned to the experimental group were brought into a single small group discussion-decision session designed to encourage self-confidence in coping with asthmatic attacks with proper medication. To investigate the effects of this program, we had the participants and a control group maintain daily symptom diaries for five weeks, and their emergency room utilization was monitored for four months.18
The important result of this study was the decreased emergency room utilization by the group discussion participants as contrasted with the control group. Without specifically discouraging the use of the emergency room, significantly smaller numbers of asthmatics in the experimental group were treated in the emergency room for at least five weeks after the educational session. Of greater concern from a cost-benefit standpoint is the effect not on number of patients but on the cumulative total number of visits to the emergency room for asthma treatment. The control group during the original four months of this study had at least twice as many visits as the experimental group during all but the seventh week, when the ratio was only slightly less than two to one. The absolute difference in the cumulative number of visits between the experimental and control groups increased from 13 visits at five weeks to 55 visits at four months.
The most conservative assumption one might make in extrapolating from these results would be that the difference (the reduction in emergency room visits) disappears after four months. In fact, our most recent check of the billing records found 20 visits by the control group in the 11th and 12th months following the program and only eight visits by the experimental group. But a minimal estimate of impact is possible without making assumptions beyond the original experimental data. Taking the four-month reduction of 55 visits for 26 patients, or approximately two visits per patient, as the most that should be expected of a single educational program, and five patients as the optimal number for each group discussion, some specific cost-benefit ratios can be derived and generalized.
An emergency room visit is currently estimated to cost $20.* Whether

*One of the problems in cost-benefit analysis is in assessing the true benefits to be ascribed to the specific behavioral change or disease category. The $20 per emergency room visit averted is an average for all emergency room visits. Asthmatic visits may actually cost more or less than $20.19
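On those figures, the benefit side of the ratio can be sketched as follows. The cost of running a discussion session is not given in this excerpt, so only the averted-visit benefit is computed; the variable names are mine.

```python
# Figures from the text: 55 visits averted over four months among 26
# experimental-group patients, $20 average per emergency room visit,
# five patients per discussion group.
visits_averted = 55
patients = 26
per_patient = visits_averted / patients   # roughly two visits averted each

cost_per_visit = 20
group_size = 5

benefit_per_group = group_size * round(per_patient) * cost_per_visit
print(benefit_per_group)  # 200, i.e., about $200 averted per five-patient session
```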
not an accident of history or hybrid corn. It is a dynamic representation of the same natural law that applies to other biological, psychological, and sociological phenomena that gave birth to statistical theory. When a community health education program is planned, it is designed usually to appeal to the common motives of the largest number of people. In so doing, it necessarily misses the unique motives of large numbers of individuals. A community program also aims to remove "enabling" barriers for the majority, but cannot deal with the unique barriers for many individuals.
The net effect of these limitations is that most programs cannot expect
more than 50 percent of the target population to change behavior
unless the program reaches down to the individual level and interacts
with the unique motives and barriers of individuals. This tends to
violate the conditions of experimental designs set up to evaluate
community programs. Hence, the evaluation of community health
education programs usually ends at the point when the evaluation
design must be abandoned in order to reach the second 50 percent of
the population. This is perhaps why the point of diminishing returns
has not been well documented in health education.
There are additional explanations that can be evoked. One is the
feedback mechanism by which health personnel and consumers or
patients tend to communicate their expectations for each other's
performance, resulting in reinforcement of negative behavior in the
"negative" half of the population, which in turn reinforces the negative
attitudes (stereotypes) and behavior of health personnel toward them.lu
In a series of studies, we are beginning to get a clearer picture of the
part that the ascribed characteristics of patients play in the behavior of
doctors and nurses toward them. In an analysis of carefully ob-
served and recorded behavior of nurses in a sample of nursing
homes, for example, Caroline White has documented that supportive
measures by nurses are significantly correlated with those attributes of
patients which would make them least likely to need support. Nurses
tend to give less support to those patients who are in greater need of
support as measured by their ability to perform basic activities of daily
living.21 A similar study by Sandra Hellman focused on the cues to
which dentists responded in giving information to the patient in the
dental chair.22 The point of these studies is related to the "reinforcing
factors" noted in the beginning of this paper. There tends to be a
vicious cycle of action and reaction that militates against improvements
in the already disadvantaged half of the population. Elling discussed
this phenomenon in terms of the "reflexive self-concept" which
develops in the patient after his encounter with clinic personnel.23
Similar processes have been documented in the lay referral system for
preventive health actions.24
A Modified Hovland Effectiveness Index
Whatever the explanation, it does appear from experience, from
evaluative studies, and from diffusion theory that it gets harder to
obtain change the farther one moves through the population. It is for
this reason that I have preferred as a comparative measure of success in
health education evaluations a modified version of the effectiveness
index originally developed by Hovland and his associates.25 We used
this measure in relation to success levels for 13 different criteria
(knowledge, attitudinal and behavioral change) for experimental and
control groups and subgroups within the major groups in the Dacca
Family Planning Experiment.26 This index enabled us to standardize all
measures of change while controlling for the "ceiling effect," which has
been the concern of the foregoing discussion. The modified
effectiveness index (EI) is computed as follows:

EI = (P2 - P1) / (100 - P1)

where P1 is the percentage positive before the program, and P2 is the
percentage positive after the program. ("Positive" is defined by the
behavioral objective of the program.) Thus, the numerator consists of
the actual change, and the denominator the potential change. The
quotient represents the proportion of those not previously positive who
became positive during the program. Note that the quotient is not
multiplied by 100 as in the original Hovland index because we want the
index expressed as a probability estimate with a value between zero and
one for purposes of adjusting estimated potential benefits by a fraction
representing proportional effectiveness.
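The arithmetic of the modified index can be sketched in a few lines of Python; the function name and the example figures are illustrative, not from the text:

```python
# Modified Hovland effectiveness index: EI = (P2 - P1) / (100 - P1).
# P1 and P2 are the percentages "positive" before and after the program;
# the quotient is deliberately NOT multiplied by 100, so it reads as a
# probability between zero and one.

def effectiveness_index(p1: float, p2: float) -> float:
    """Proportion of the previously non-positive group who became positive."""
    if p1 >= 100:
        raise ValueError("no potential change remains when P1 is 100")
    return (p2 - p1) / (100 - p1)

# Illustrative figures: 40 percent positive before, 50 percent after.
print(round(effectiveness_index(40, 50), 2))  # 0.17
```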
A Cost-Benefit Index
This measure of effectiveness may be incorporated into cost-benefit
concepts if we can agree that each additional percentage of progress
(i.e., each new beneficiary) should be given greater economic weight
than the last.* Accepting this essentially philosophical, ideological, or
political premise, the equation for a cost-benefit index that goes beyond
a simple cost-benefit ratio would be:
CBI = EI (B/N - C/N)
where CBI is a standardized cost-benefit index that can be compared
between programs, places and times; EI is the effectiveness index
described above; B is the potential benefit, in dollars, which would
accrue to the institution or the community sponsoring the program if
everyone responded as intended by the program; N is the target
population, i.e., the number of people or families “at risk” who might
have responded to the program; and C is the total cost of the program.
The result then is the net potential per capita benefit (in parentheses)
reduced to the proportion of previously ”negative” individuals or
*There is some support within economics for the concept that the value of a gain is
dependent on the starting point.27
families who are converted to “positives” during the program.
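As a sketch, the index might be computed as follows; the figures are hypothetical, and only the formula CBI = EI (B/N - C/N) comes from the text:

```python
def cost_benefit_index(ei: float, benefit: float, cost: float, n: int) -> float:
    """CBI = EI * (B/N - C/N): the net potential per capita benefit,
    reduced to the proportion of previously 'negative' individuals
    converted to 'positive' during the program."""
    return ei * (benefit / n - cost / n)

# Hypothetical program: $100,000 potential benefit, $10,000 cost,
# 1,000 people at risk, 50 percent effectiveness.
print(cost_benefit_index(0.5, 100_000, 10_000, 1_000))  # 45.0
```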
Let us now apply this index to the example of the proposed health
education and screening program directed at hypertensives in
Alexandria. First, the projected potential benefit from preventing the
complications of hypertension in this community has been
conservatively estimated at $625,000. The number of people at risk of
complications is estimated to be the 5,000 Alexandrians assumed to
have extremely high blood pressure. The total cost of the program is
$10,000. Thus, CBI = EI [($625,000/5,000) - ($10,000/5,000)].
The figures within the brackets constitute the net potential savings to
each of the "at-risk" individuals, with the actuarial risk shared equally
by all of those with very high blood pressures, even though only 250 of
them would be expected eventually to have strokes or coronary
infarctions. It becomes essentially an insurance premium if we could
charge each of them the $2.00 per capita cost of the program
($10,000/5000). The theoretical average per capita protection for that
$2 is $625,000/5000 = $125, a good investment indeed if the
protection is real.
In order to estimate the actual protection, we need to know how
effective the program will be. This case ideally illustrates the value of
the modified Effectiveness Index as a component of the cost-benefit
index because it takes into account the fact that some of the target
group of 5,000 may already be under medical care. The Alexandria
Community Health Center is currently treating 1,590 patients with
hypertension. We might assume that approximately 500 of those are
from the 5,000 Alexandrians with very high blood pressures and that an
additional 1,500 hypertensives are under the care of private physicians.
Thus, the percentage already under prophylactic care prior to the
program (P1) is 2,000/5,000, or 40 percent. This means that the potential
change, 100 - P1, is 60.
Following the assumptions outlined in the original example, namely
that only half of the 5,000 high-risk cases are identified at the end of the
program, we would have:
EI = (50 - 40) / (100 - 40) = 10/60 = .17, and

CBI = .17 ($125 - $2) = $20.91,
which is a more realistic indicator of the actual per capita value of the
program than $125. If the program were effective in reaching half of
those not previously identified, the CBI would be .50 ($123) = $61.50.
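The Alexandria arithmetic can be replayed directly, rounding EI to two places first as the text does:

```python
# Alexandria hypertension example, following the paper's own rounding.
B, C, N = 625_000, 10_000, 5_000      # benefit, cost, target population
P1, P2 = 40, 50                       # percent under care before/after

ei = round((P2 - P1) / (100 - P1), 2)  # 10/60 -> 0.17
cbi = round(ei * (B / N - C / N), 2)   # 0.17 * (125 - 2)
print(ei, cbi)                         # 0.17 20.91

# If half of those not previously identified were reached instead:
print(round(0.50 * (B / N - C / N), 2))  # 61.5
```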
These figures only become meaningful, of course, when compared
with alternatives. The task ahead, then, if we agree that the CBI
accurately reflects the cost-benefit utility of a program, is to construct
CBI tables contrasting the cost-benefit indices for different programs
under different assumptions. The data need not come from a single
source. Projected or theoretical CBI’s could be calculated with data
from a variety of sources. The EI and per capita cost (C/N) components,
for example, could come from a demonstration project in one or two
communities, while the estimates of potential benefit could be
generated from national Blue Cross or Medicaid figures.
An Experimentally Adjusted Cost-Benefit Index
An additional refinement to the CBI can be added if the source of the
modified EI is an evaluative demonstration study in which there was a
control group or comparison group not exposed to the health education
program. It would then be possible to subtract from the gross
Effectiveness Index the proportion of effectiveness attributable to
forces other than the program itself (e.g., historical trends, maturation,
regression effects, pretest effects, etc.). A separate Effectiveness Index
computed for the control group or comparison group could be
subtracted from the modified EI to yield a net effectiveness index:

NEI = EIe - EIc
    = [(P2e - P1e) / (100 - P1e)] - [(P2c - P1c) / (100 - P1c)].
From this, an adjusted cost-benefit index more strictly attributable to
the actual (net) effectiveness of the program would be:

ACBI = NEI [(B/N) - (C/N)].
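A sketch of the net and adjusted indices; the function names and example figures are mine, while the formulas are those just given:

```python
def net_effectiveness_index(p1e, p2e, p1c, p2c):
    """NEI = EIe - EIc: effectiveness beyond what the control group shows."""
    return (p2e - p1e) / (100 - p1e) - (p2c - p1c) / (100 - p1c)

def adjusted_cbi(nei, benefit, cost, n):
    """ACBI = NEI * (B/N - C/N)."""
    return nei * (benefit / n - cost / n)

# Hypothetical: the experimental group moves 40 -> 50 percent positive
# while the control group drifts 40 -> 43 percent on its own.
nei = net_effectiveness_index(40, 50, 40, 43)
print(round(nei, 2))                                      # 0.12
print(round(adjusted_cbi(nei, 625_000, 10_000, 5_000), 2))
```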
Cumulative vs Annual Cost-Benefit Indices
Finally, the index can be adjusted, defined or interpreted in relation
to the time span over which the costs are spread and the benefits from a
given expenditure are to be realized. In the Alexandrian example, our
estimated potential benefits were cumulative in relation to the 30-year
average additional life span after diagnosis of hypertension. The costs
were limited to a one-year project budget of $10,000. In this case, the
benefits are realized more at the end of the 30 years; whereas in
some programs the benefits are immediate. To adjust for these
differences, an annual cost-benefit index would be adjusted by dividing
the per capita potential benefits by the number of years over which the
cumulative benefits are realized (Yb), and the per capita costs by the
number of years over which the expenditures are made (Yc):

ACBIa = NEI [(B/N/Yb) - (C/N/Yc)].
The subscript "a" is suggested to distinguish a one-year average
(annual) cost-benefit index.* Numerical subscripts could replace the

*Both annual and cumulative CBI's should be discounted at a rate of five to seven
percent per year if Yb is greater than three or four years. Discounting is the opposite of
compounding, whereby future value shrinks rather than expands by a factor usually
estimated as equal to the current interest rate. The discount formula for the example here
would be Bp = sum from j = 1 to Y of [Bj / (1 + i)^j], where Bp is the present value of all future
benefits; Bj up to BY is the expected benefit stream in absolute amount by year; and i is the
current rate of interest.
"a" to indicate cumulative cost-benefit estimates. Thus, our original
cumulative estimate for the Alexandrian hypertension project without
discounting would be expressed as CBI30 = $20.91, whereas an average
per capita cost-benefit index would be calculated as follows:

CBIa = .17 [($125/30) - ($2/1)]
     = .17 ($4.17 - $2)
     = $.37
The advantage of the annual per capita CBI is that it further
standardizes the index so that it is immediately comparable with all
other annual CBI's, regardless of the time it takes to realize the full
benefit. The advantage of the duration-specific CBI with the numerical
subscript is that it shows the cumulative benefit in relation to the years
to which the benefit is carried.
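The annualized index and the discounting footnote can both be sketched; the function names are mine, and the loop implements the footnote's formula Bp = sum of Bj / (1 + i)^j:

```python
def annual_cbi(nei, benefit, cost, n, yb, yc):
    """ACBIa = NEI * (B/N/Yb - C/N/Yc): per capita benefits spread over
    the Yb years in which they accrue, costs over the Yc budget years."""
    return nei * (benefit / n / yb - cost / n / yc)

def present_value(benefit_stream, i):
    """Discounted present value of a stream of future benefits:
    Bp = sum of Bj / (1 + i)**j, per the footnote's formula."""
    return sum(b / (1 + i) ** j for j, b in enumerate(benefit_stream, start=1))

# Alexandria again: EI = .17, $125 per capita benefit over 30 years,
# $2 per capita cost in one budget year.
print(round(annual_cbi(0.17, 625_000, 10_000, 5_000, 30, 1), 2))  # 0.37

# Discounting a flat $100-per-year benefit stream at 5 percent:
print(round(present_value([100] * 30, 0.05), 2))
```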
SUMMARY
1 . Evaluation is redefined as the comparison of an object of interest
against a standard of acceptability.
2. Objects of interest in health education are identified in Figure 1
with the suggestion that any one of them or the relationship between
any two or more of them may comprise the object of interest in the
evaluation of health education.
3. Measures required for comparison of the objects are discussed in
terms of some of the major methodological problems in the evaluation
of health education.
4. Six types of standards of acceptability implicit in various
administrative approaches to evaluation are classified.
5. Some theoretical relationships between educational inputs and
medical or administrative benefits are proposed in Figure 2 as a
possible starting point for economic analyses of health education.
6. Steps in approaching the research literature for purposes of
deriving cost-benefit data are proposed.
7. Some recent studies are presented as examples of the kinds of
research currently needed to strengthen the data base for cost-benefit
assessments of health education.
8. A cost-benefit index is proposed to identify the specific parameters
and statistics required to construct standardized estimates that would
allow comparisons between various health education strategies and
between different problems and populations in which health education
might be applied.
REFERENCES
1. Andersen R: A Behavioral Model of Families' Use of Health Services. Chicago,
University of Chicago Center for Health Administration Studies, Research Series No.
25, 1968.
2. Green LW: Status Identity and Preventive Health Behavior. Berkeley, University of
California School of Public Health, Pacific Health Education Reports No. 1, 1970.
3. Phillips DL: Knowledge From What? Chicago, Rand McNally, 1971.
4. Green LW, Young MAC, Sollad R: Criteria and Rating Scales for the Assessment of
Program Structure and Process. New York, Macro-Systems, Inc. Consultation Report
for U.S. Department of Health, Education and Welfare, 1972.
5 . Green LW, Figi-Talamanca I: Suggested designs for evaluation of patient education
programs. Health Educ Monogr 2(1):54-72, Spring 1974.
6. Green LW: Evaluation of patient education: considerations and implications.
Proceedings, Workshop on Patient Education Programming. Rockville, Health Care
Facilities Service, Health Resources Administration, DHEW Pub. No. (HRA)74-002,
1973.
7. Deniston OL, Rosenstock IM: The validity of nonexperimental designs for evaluating
health services. Health Services Rep 88:153-164, February 1973.
8. Klarman HE: Application of cost-benefit analysis to health systems technology. In
Collen MF (ed): Technology and Health Care Systems in the 1980's. Rockville,
National Center for Health Services Research and Development, DHEW Pub. No.
(HSM) 73-3016, 1972.
9. Bernstein L, Dana RH: Interviewing and the Health Professions. New York,
Appleton-Century-Crofts, 1970, pp 6-14.
10. Collen MF, Dales LG, Friedman GB et al: Multiphasic checkup evaluation study, 4.
Preliminary cost-benefit analysis for middle-aged men. Prev Med 2:236-246, June
1973.
11. Udry JR et al: Can mass media advertising increase contraceptive use? Fam Planning
Perspec 4(3):37-44, July 1972.
12. Campbell DT, Stanley JC: Experimental and Quasi-Experimental Designs for
Research. Chicago, Rand McNally and Co., 1969.
13. Aujoulat LP: L'éducation pour la santé: peut-on en évaluer la rentabilité? Int J
Health Educ 16:21-27, 1973.
14. Teeling-Smith G: The economic aspects of preventing disease and promoting health.
In Baric L (ed): Proceedings of a Conference on Behavioral Sciences in Health and
Disease, The Health Education Council, London. Published by the Int J Health Educ,
Geneva, 1972, p 106.
15. Green LW, Ephross PH, Wang VL: A three-year, longitudinal study of the impact of
nutrition aides on the knowledge, attitudes and practices of rural poor homemakers.
Amer J Pub Health 64, 1974 (in press).
16. Rice DP: Economic Costs of Cardiovascular Diseases and Cancer. Health Economics
Series No. 5, DHEW, Washington D.C., Government Printing Office, 1965.
17. Inui TS: Effects of Post-Graduate Physician Education on the Management and
Outcomes of Patients with Hypertension. Baltimore, Johns Hopkins University
School of Hygiene and Public Health, unpublished Master of Science thesis, June
1973.
18. Avery C, Green LW, Kreider S: Reducing emergency visits of asthmatics: an
experiment in patient education. Presented as testimony before the President's
Committee on Health Education, Regional Hearings, Pittsburgh, Pennsylvania,
January 1972.
19. Klarman H: The Economics of Health. New York, Columbia Univ. Press, 1965, p
166.
20. Green LW: Should health education abandon attitude change strategies? Perspectives
from recent research. Health Educ Monogr 30:25-48, 1970.
21. White CS: Patient Characteristics and Behavior of Nursing Personnel in Nursing
Homes. Baltimore, Johns Hopkins University School of Hygiene and Public Health,
unpublished Dr. P.H. dissertation, 1974.
22. Hellman S: The Dentist and Preventive Dental Health Information. Berkeley,
University of California School of Public Health, unpublished Dr.P.H. dissertation,
1972.
23. Elling R, Whittemore R, Green M: Patient participation in a pediatric program. J
Health Soc Behav 1:183-191, Fall 1960.
24. Rogers EM, Shoemaker FF: Communication of Innovations: A Cross-Cultural
Approach. New York, The Free Press, 1971, pp 131-132.
25. Hovland CI, Lumsdaine A, Sheffield FD: Experiments on Mass Communication. New
York, John Wiley and Sons, 1949.
26. Green LW, Gustafson HC, Griffiths W et al: The Dacca Family Planning
Experiment: A Comparative Evaluation of Programs Directed at Males and Females.
Berkeley, University of California School of Public Health, Pacific Health Education
Reports No. 3, 1972.
27. Thurow L: Investment in Human Capital. Belmont, Mass., Wadsworth, 1970, p 134.
this should be done, and it seems to me that this approach is more fair
to the holistic concept of health education, and to the community
nature of the settings in which the evaluation has to be carried out.
The second section - the second paper within the paper - deals
with cost-benefit analysis. The second part of the paper starts with
Figure 2, which attempts to partition out the elements of health
education programs and relate them to possible benefits that might be
derived from them. This is based primarily on theory and, to a large
extent, on experience rather than on data. But there are data to support
many of the arrows postulated in this figure. In relation to each arrow
on the left, a very gross and tentative estimate of relative cost is
suggested. Those costs can be related to very gross, tentative, and
theoretical estimates of benefit in economic terms, that might be
realized, as indicated by the arrows on the right.
But returning again to my suggestion that the evaluation should be
done comprehensively rather than segmentally, I would insist that this
table not be interpreted as a separation of variables for purposes of
setting up laboratory designs or experimental programs that attempt
exclusively to relate one arrow on the left to one arrow on the right, but
rather that all of the elements listed on the left are critical to the
development of a comprehensive health education program in a
medical-care or community setting. Thus, one would need to develop
all of the elements on the left and then selectively evaluate them in
relation to the elements on the right by withdrawing them one at a time
from random subsamples of the patient population or the community.
We can skip section 3, which discusses the approach to the research
literature in this area. I referred in that section to a competitive grant
we had applied for to the National Science Foundation. We didn’t get
the grant, so let’s skip section 3.
Section 4 deals with some examples - we can skip that.
Section 5 is the proposed cost-benefit index. My purpose in this
section was to sort out the elements of an appropriate index for health
education that can be standardized for economic comparisons of one
program against another, or a program in relation to one disease versus
another. By isolating the parameters, we can then know what data have
to be collected in order to make such economic assessments. A major
feature of the index is that its use is not dependent on getting all of the
data from one source, from one study. I think the thing that scares
people the most about trying to do cost-benefit assessments is that they
assume they have to set up a study design to get all of the data - the
economic as well as the effectiveness data - from one situation. I am
proposing an index in which the cost and the benefit data can be
derived from sources independent of the specific study in which one
determines what the effectiveness of a program is, so that there are two
halves to the equation: there is the half in the parentheses on the right
that represents the theoretical, potential benefits that might be derived
after you subtract the cost from the total potential benefits. The section
on the left of the parentheses - on the right of the equal sign, but on
the left of the parentheses - is the effectiveness index, which reduces
the maximum potential benefits by an amount proportional to what we
can expect on the basis of effectiveness studies. I had thought of going
through this on the blackboard, but I'd rather use the time now to
answer questions that may have been raised by the paper; ideas you have
in response to the suggested index; what potential it may have, as you
see its application in the field.
Let me apologize again - I see faces dropping - for the length of the
paper. I do hope to tighten it up some. Miss Skiff! Does everybody have
a copy of the paper?
Q: Yes, Larry, that’s what I was going to say. Some people, I think,
didn’t receive a paper yet in the mail.
A: Is that right?
Q: And they might be fascinated by watching you run through one
exercise with your equation.
A: (Dr. Green presented an example of the derivation of a cost-
benefit estimate for a hypothetical program.)
Q: Have you applied this formula to the patient education program
you conducted for asthmatics at Johns Hopkins?
A: I haven’t yet. The asthmatic experiment is described in the paper.
What we did in that study was to base our cost-benefit analysis entirely
on the reduction of emergency room visits, each of which is estimated
to cost $20, so that the potential benefit for the program would be the
number of people who are visiting the emergency room times 20 for
each time they visited the emergency room. That would be the potential
total-dollar-benefit of the program. We got a certain level of
effectiveness, and I can’t express it now in CBI terms because I haven’t
calculated it in that way, but let's say we got about 50 percent
effectiveness in reducing emergency room utilization. So the total
potential benefits of the program could be multiplied by .50 to get the
cost-benefit index for the program. What we did, instead, was to take
the total cost of the program as a ratio to the total benefits of the
program as measured by the difference between the experimental
groups’ visits over a period of time following the program and the
control groups' visits. The cost-benefit ratio when we did it that way
came out to about $10 saved for every dollar spent on the program.
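The back-of-the-envelope logic described here might look like this; the $20 per emergency room visit comes from the transcript, while the other figures are hypothetical placeholders:

```python
# Asthma emergency-room example: the benefit is visits averted times the
# estimated $20 cost per visit; the ratio compares that saving to the
# program's cost. The visit and cost figures below are made up.
COST_PER_VISIT = 20          # dollars, from the transcript
visits_averted = 500         # hypothetical difference vs. control group
program_cost = 1_000         # hypothetical, chosen to show a 10:1 ratio

savings = visits_averted * COST_PER_VISIT
print(savings / program_cost)  # 10.0 -> about $10 saved per dollar spent
```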
Q: How would you handle the fact that in many situations, by paying
attention to the administrative setup of the service and the attitudes
which those who provide service bring to those they serve, we’d affect
the success of the service provided? In other words, there would be
fewer broken appointments, and there would be more willingness to
stay with the suggestions made in the way of treatment or modification
of life style. Would you see this as an educational program?
A: Yes.
Q: And yet, it's one which reflects the attitude of those who have
responsibility for administering the service, and is, in a sense, a way of
life that they would like to see prevail in their service.
A: I think that what you are saying is that it represents the success of
an educational program if you’ve effectively changed the attitudes and
behavior of staff in relation to patients. I’ll agree. In Figure 1 I have
identified that set of factors as “reinforcing factors.” I have indicated
that in-service training, staff development, consultation, continuing
education, and supervision all have educational elements and are part
of the total educational program. Education can take credit for changes
in attitudes and behavior that may result from those efforts. That, then,
is an input, but for cost-benefit purposes you still have to measure the
changes in patient behavior, or consumer behavior, in response to those
attitudinal changes. Look at Figure 1, if you will.
Q: How would you put numbers or values on this thing?
A: Your cost figures for in-service training can be calculated from
your budget for those parts of the total health program that represent
staff development. All right, you’ve got the cost element now. The
benefits to be derived from that are behavioral changes in the target
population as indicated by various behavioral indicators - utilization,
preventive actions, consumption patterns, compliance, and so forth, so
that you can total up the potential benefits to be derived from that. Now
you've got the figures in parentheses. Now to find out whether the
program is effective or not, you could compare that program in which
you have conducted those kinds of activities with a control group in
which there was no staff development effort; or you could find out what
the behavior is before you start the in-service training and compare it
with the behavior after you have done the in-service training or staff
development. Then you've got P1 and P2, and you can compute an
effectiveness index. An adjusted effectiveness index could be computed
when you have both pretests and posttests in an experimental group and
a control group, so that you’d subtract the effectiveness index for the
control group from the effectiveness index for the experimental group.
Q: I was reminded after reading your paper of the school health
study. We found that if we changed administratively the number of
children that a physician can see in an hour - in other words, changed
the administrative environment in which he worked - and developed a
program whereby the parent must be present, there was then sufficient
time for interchange between the physician and the parent, and the
nurses and the patient. Thus, we cut down the amount of follow-up that
had to be done, and we got something like 90 to 95 percent compliance
with suggestions when the parents were present, as against a very small
percent when the parents weren't present. There's a cost-benefit in
terms of benefit to the child; a cost-benefit to the program, saving nurse
time, which is her salary, etc.
A: I think that's a very good example of how to attach values, which
was the question you posed - so you had an answer for it when you
posed the question. You can take a change in staff or administrative
arrangements and come up with a cost estimate for that change, and
relate it to the benefits that are derived from it.
Q: While we are talking about cost-benefit analysis, the implication of
much of what we say is that the input is a matter of communications
input strictly; that is, we have an educator who communicates some new
concepts or facts t o a recipient. And yet, I think that there is another
type of program that we have been experimenting with, which is in a
sense more encompassing and more in line with a kind of therapeutic
community; and that is the health maintenance program, where we have
specially trained nurses (i.e., PRIMEX), who have a much more active
therapeutic role than the average nurse and take more initiative and,
together with the physician and the patient, develop a total management
plan for that patient and his or her illness. This kind of education is not
just a matter of communicating ideas, but it is actually putting the
patient into a situation where he has somewhat fewer options about
doing the wrong thing and more options towards doing the right thing.
This is, perhaps, somewhat in line with what Dr. Davis was talking
about. It seems to me that the possibility of estimating cost-benefits,
taking this whole constellation into account, is a much simpler model
than that of trying to incorporate the patient into the whole system.
A: Well, one of the issues in cost-benefit analysis, it seems to me, is
whether we should account for the extra time we ask patients, or
consumers, to take to participate in the planning of the program - how
do we count these hours? Do we multiply their wages by the hours they
are involved in the program? What about their transportation and
baby-sitting costs? I think we have to resolve some of those issues in
deciding how to total up the costs of a program. How about the patient's
expenditure for prescription drugs? That is an additional cost to the
patient for taking up the behavior that we recommend. Do we add that
to the cost side? So we have to go beyond, I suspect, in the final
analysis, the budget figures that we have for conducting the program;
and we have got to look at the marginal cost to other parties involved.
But I think we can also add to or take from the benefit side an account
of programs that a health education program might replace. Certain
kinds of unnecessary activities in our programs may be averted by the
success of the health education program.