DOI: 10.18267/j.pep.380
CONTINGENT VALUATION: PAST, PRESENT AND FUTURE
David Hoyos, Petr Mariel*
Abstract:
This paper summarizes the long history of the contingent valuation method, stressing the
important dates and events that influenced its economic applications. It reviews the economic
theory of contingent valuation, highlights the related survey design, alludes to the econometrics
methodology involved and discusses the validity and reliability of this method. In summary, this
paper presents the state of the art of a method that has been applied in the economic valuation of
natural resources for many decades.
Keywords: economic valuation, stated preference information
JEL Classification: Q510
1. Introduction
The economic valuation of natural resources using stated preference (SP) information
has come to be known as contingent valuation (CV), given that the value estimates
obtained are contingent on the information previously provided to the respondent in
the survey. CV is deeply rooted in welfare economics: to be precise, in the neoclassical
concept of economic value under the framework of individual utility maximisation.
CV surveys are capable of directly obtaining a monetary (Hicksian) measure of
welfare associated with a discrete change in the provision of an environmental good,
by substituting one good for another or by the marginal substitution of different attributes
of an existing good. Various other terms have been used for the value estimates derived
from stated preference information, depending on the elicitation format used: discrete
choice experiment, bidding game, open-ended question, choice-based conjoint analysis, contingent ranking, single- or double-bounded dichotomous
choice, paired comparisons, payment card, etc.
The history of the contingent valuation method (CVM) can be broadly divided
into three periods. In the first period (1943-1989), covering the origins of the method
up to the Exxon Valdez accident, the CVM took shape as an alternative to revealed
preference methods, such as the travel cost method (TCM), especially in the field of
* David Hoyos and Petr Mariel, Department of Applied Economics III, University of the Basque Country, Lehendakari Agirre 83, 48015 Bilbao, Spain (david.hoyos@ehu.es; petr.mariel@ehu.es). The authors are grateful to the Department of Education of the Basque Government for grants IT-334-07 (UPV/EHU Econometrics Research Group) and SEJ2007-61362 (from the Spanish Ministerio de Educación and FEDER).
outdoor recreation. In the second period (1989-1992), the extensive debate following
the Exxon Valdez oil spill stimulated further research on the theory and empirics of
stated preferences for non-market valuation techniques. Finally, from 1992 onwards,
the CVM has been consolidated as a non-market valuation method, being accepted at
both an academic and a political level.
Back in the 1940s, Bowen (1943) and Ciriacy-Wantrup (1947) were the first to
propose the use of a public opinion survey as a valid instrument to value public goods,
based on the idea that voting could be the closest substitute for consumer choice. In
what is often considered the first book on environmental and resource economics,
Resource Conservation: Economics and Policy, Ciriacy-Wantrup (1952) defends the
use of “direct interview methods”. However, influential economists, such as Samuelson
and Friedman, mistrusted survey responses on the grounds of strategic behaviour and
non-rationality of responses (Carson and Hanemann, 2005).
Outdoor recreation was the main force behind early empirical developments of the
CVM in this period. During the 1950s and 1960s, managers of the U.S. National Park
and U.S. Forest Services required information on people’s preferences and willingness
to pay (WTP) for these public services. Similarly, the U.S. agencies building water
projects at that time were interested in recreational benefits in order to make these
projects more attractive under the cost–benefit analysis framework. The first economist
to implement a CV survey was Davis (1963a, 1963b), in his research on the economic
value of recreation in the Maine woods. The author argues that real market behaviour
could be simulated in a survey by describing alternative facilities available to the public
and then eliciting the highest possible bid. A few years later, CVM and TCM estimates
were for the first time successfully tested for convergent validity (Knetsch and Davis,
1966). Meanwhile, surveys were used to elicit information about preferences for
public goods in the fields of health and transport economics by Michael Jones-Lee and
Jordan Louviere, respectively. Also, the range of applications spanned different types
of environmental goods, such as recreation, air quality, congestion, waste management
and others.
Developments regarding the CVM’s ability to measure non-use values advanced the
methodology substantially in the late 1960s. In fact, it was shown
that WTP estimates could include potentially important non-use values: option values
(Weisbrod, 1964), existence values (Krutilla, 1967) and quasi-option values (Arrow
and Fisher, 1974). Existence or passive-use values were found to be significant in
environmental valuation since many respondents were showing positive WTP values
for environmental quality changes that were not reflected in any observable behaviour.
In Krutilla (1967), considered as one of the most influential papers in environmental
economics, the author highlights the importance of irreversibility in environmental
decision making and discusses the possibility that non-use values constitute a main
component of the total economic value of an environmental good (Portney, 1994). Of
course, not including these values would give wrong signals to policymakers and the
only methodology available to capture them was CVM. Furthermore, the theory of
existence value and quasi-option value had a strong influence on the management of
unique and threatened natural resources (Aldy and Krupnick, 2009).
From a methodological point of view, it is important to highlight the paper published
by Bishop and Heberlein (1979), which was the first to incorporate the dichotomous
format in CV surveys. The dichotomous format (also known as referendum or closed-ended) gained considerable acceptance because of its incentive compatibility (i.e. it
induces respondents to reveal their true preferences) and its substantial simplification
of the cognitive task faced by respondents. The theoretical formulation of the CVM
is due to Hanemann (1984), Cameron and James (1987) and Cameron (1988).
The approaches are dual to each other: while the former formulates the problem
through two indirect utility functions, known as the difference in indirect utility
functions model, the latter interprets the response to a CV survey as a comparison
between the bid amount and the respondent’s true underlying value, known as the
variation function model (McConnell, 1990).
The methodology was further refined and gained considerable political acceptance
in the United States in the 1970s and 1980s given that it was accepted as an economic
valuation tool by many federal institutions. Two US federal laws were approved
in these years: the Comprehensive Environmental Response, Compensation and
Liability Act of 1980, with the purpose of identifying potentially threatened sites
and funding their recovery, and its regulatory development of 1986 allowing for the
recovery of lost passive-use values and the use of CVM (Portney, 1994). CV was also
increasingly applied in Europe. All the previous developments were brought together
in an Environmental Protection Agency (EPA) conference in 1984 and they were
incorporated in the influential book by Mitchell and Carson (1989). The book included
a coherent theoretical framework and put forward the state of the art of the CVM:
design issues, elicitation formats, potential biases, etc.
That same year, on 24 March 1989, the Exxon Valdez oil spill occurred in Alaska
and, based on the recently developed legislation, the State of Alaska sued the company
for the loss of passive-use values. In the words of Carson et al. (2003), “the Exxon
Valdez represented the quintessential case in which, to ignore passive use values, was
to effectively say that resources that the public had chosen to set aside and not develop
could be harmed at little or no cost to the responsible party”. On the other hand, the oil
industry started a campaign aiming at questioning the reliability of the CVM. Critiques
against the CVM were exposed in a conference held in Washington D.C. in March
1992 and sponsored by the Exxon Company (Hausman, 1993). Opponents of the CVM
claimed that the reliance on CV surveys in either damage assessments or government
decision making was misguided (Diamond and Hausman, 1994). Later that year,
the National Oceanic and Atmospheric Administration (NOAA) Blue Ribbon Panel
co-chaired by Nobel laureates Kenneth Arrow and Robert Solow reviewed the theoretical
and empirical work on CVM and concluded that: “CV studies can produce estimates
reliable enough to be the starting point for a judicial or administrative determination
of natural resource damages including lost passive-use value”. In addition, the panel
established a set of guidelines for CV studies to follow in order to gain reliability. These
guidelines were extremely influential in the subsequent development of the methodology.
The sometimes heated but always healthy debate that occurred in this historical period
actually helped enrich the methodology by forcing academics to look deeper into the
underlying economic theory and other applied issues. Finally, the acceptance that the
CVM has nowadays at both a political and an academic level has boosted the number
of applications that are reported every year.
In order to analyse the state of the art of the CVM, the following section will deal
with the theoretical foundations of CV. Issues on survey design will be dealt with in
Section 3; the econometrics of CV will be analysed in Section 4; and, finally, Section
5 will provide some concluding remarks. A comprehensive overview of this valuation
method can be found in Alberini and Kahn (2006), Carson and Hanemann (2005) and
Haab and McConnell (2002).
2. Economic Theory of Contingent Valuation
From a welfare economics perspective, public intervention may be justified under
the notion of a potential Pareto improvement: that is, if the overall benefits of public
intervention exceed its costs. In this context, public intervention may guarantee greater
efficiency in resource allocation. However, estimating social benefits requires, on the
one hand, the estimation of each individual’s benefits and, on the other, the aggregation
of these benefits over the relevant population. The precise measure sought in the process of
estimating an individual’s benefits is the net change in income that relates to a change in
the quality or quantity of a non-market good. That is, precisely, the linkage between the
survey instrument and economic theory, because the CV survey provides information
to trace the WTP distribution for a proposed change in an environmental good. CV
combines economic theory associated with the structure of the utility function and
econometric theory associated with the way that disturbances enter into the process.
In fact, the structure of the utility function will be affected by the assumptions made
about the distribution of the error component.
CVM obtains an individual’s WTP for or willingness to accept (WTA) the change
in environmental quality through the survey instrument. The utility-theoretic model
outlined in the introduction provides the basic framework for
interpreting the responses to a CV study. Given that these responses are usually treated
as random variables, the economic model needs to incorporate a stochastic component
and the WTP distributions need to be linked to the survey response probability under
the assumption that an individual maximises her utility (Carson and Hanemann, 2005).
The cumulative distribution function of WTP, $G_C$, and the corresponding probability
density function, $g_C$, depend on the form of the survey question. In the case of an
open-ended question format, where individuals are asked to state their maximum WTP
directly, $A$, the probability that the individual’s WTP is equal to $A$ is:

$\Pr(WTP = A) = g_C(A)$.
In the case of a closed-ended question format, where individuals are asked whether
they would pay a certain amount of money, A, the probability that their WTP is equal
to or greater than this amount is:
$\Pr(WTP \ge A) = 1 - G_C(A)$.
In order to obtain a WTP distribution two approaches have emerged. Early
literature based on the open-ended question format assumes a linear regression on
some covariates ($Z$) and a normally distributed random term ($\varepsilon$), so that WTP is
also normally distributed:
$WTP = \beta' Z + \varepsilon$.
The second approach incorporates a random term directly into the utility function,
in what has been known as random utility models (RUM) (Hanemann, 1984). In the
RUM framework, the individual knows with certainty his utility function (this implies
that he knows his WTP) but, given that these preferences are not entirely observable to
the researcher, they are treated as a random variable, so that the error term is directly
included in the (indirect) utility function. Following the closed-ended single-bounded
CV question format, the probability that the respondent answers “yes” can be written
as:
$\Pr(yes) = \Pr\{WTP(q^0, q^1, p, y; \varepsilon) \ge A\} = \Pr\{v(q^1, p, y - A; \varepsilon) \ge v(q^0, p, y; \varepsilon)\} = 1 - G_C(A)$,

where $q^0$ and $q^1$ are scalars for the item being valued in the initial (0) and final (1)
situations, $p$ is the vector of the prices of the market commodities, $y$ is the person’s
income and $A$ is the amount of money being offered in the valuation question.
Let $\mu_{WTP} = E[WTP(q^0, q^1, p, y; \varepsilon)]$ and $\sigma^2_{WTP} = Var[WTP(q^0, q^1, p, y; \varepsilon)]$, and let $G(\cdot)$ be the
cumulative distribution function of the standardised variate $(WTP - \mu_{WTP})/\sigma_{WTP}$; the
probability function can be rewritten as:

$\Pr(yes) = 1 - G\left(\frac{A - \mu_{WTP}}{\sigma_{WTP}}\right) = 1 - G(\beta A - \alpha) \equiv H(A)$,

where $\alpha = \mu_{WTP}/\sigma_{WTP}$ and $\beta = 1/\sigma_{WTP}$. This expression, where the response to a closed-ended CV survey is a function of a monetary amount, is consistent with an economic
model of maximising behaviour “if and only if it can be interpreted as the survivor
function of an economic WTP distribution” (Carson and Hanemann, 2005). The
probability model can be parametric or non-parametric as long as the relation between
the bid amount and the probability of responding “yes” is non-increasing. The graph
showing the response probability function can be considered a demand curve for the
change in the environmental good.
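To make the survivor-function formulation concrete, the following sketch evaluates $H(A)$ under a probit link; the values chosen for $\alpha$ and $\beta$ are purely illustrative assumptions, not estimates from any study.

```python
from scipy.stats import norm

# Illustrative parameters: alpha = mu_WTP / sigma_WTP and beta = 1 / sigma_WTP,
# so these choices imply mu_WTP = alpha / beta = 50 and sigma_WTP = 1 / beta = 20.
alpha, beta = 2.5, 0.05

def prob_yes(A):
    """Survivor function H(A) = 1 - G(beta * A - alpha), with G standard normal."""
    return 1.0 - norm.cdf(beta * A - alpha)

for A in (10, 25, 50, 75, 100):
    print(f"bid {A:>3}: Pr(yes) = {prob_yes(A):.3f}")  # non-increasing in the bid
```

Plotting these probabilities against the bid traces out exactly the demand-like curve described above.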
In the parametric approach to the specification of a response probability model,
the probability of a “yes” response is a known function of the bid amount, while in the
non-parametric approach it is treated as an unknown function. Two further distinctions
can be drawn between the two approaches. Firstly, the non-parametric approach treats
the bid levels as separate experiments. Secondly, the non-parametric estimation is
capable of obtaining a probability distribution for some points, but in order to obtain
welfare measures these points need to be connected. Different ways of connecting
these points have been proposed in the literature: linear interpolation (Kriström, 1990),
Kaplan–Meier–Turnbull estimation (Carson et al., 2003) or smoothing (Copas, 1983).
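As an illustration of the non-parametric route, the following sketch computes a Kaplan–Meier–Turnbull lower-bound mean WTP from hypothetical single-bounded data; the bid levels and response counts are invented, and pooling adjacent bids whenever the observed “yes” shares violate monotonicity is the standard device for enforcing a non-increasing survivor function.

```python
import numpy as np

# Hypothetical single-bounded data: bid levels, numbers asked, numbers answering "yes".
bids    = [5.0, 10.0, 20.0, 40.0, 80.0]
n_asked = [100, 100, 100, 100, 100]
n_yes   = [82, 71, 74, 38, 12]

# Survivor estimates must be non-increasing in the bid: pool a violating bid
# with its predecessor (adding up responses) until monotonicity holds.
levels, counts, yes = list(bids), list(n_asked), list(n_yes)
shares = [y / n for y, n in zip(yes, counts)]
i = 1
while i < len(shares):
    if shares[i] > shares[i - 1]:
        yes[i - 1] += yes.pop(i)
        counts[i - 1] += counts.pop(i)
        levels.pop(i)
        shares.pop(i)
        shares[i - 1] = yes[i - 1] / counts[i - 1]
        i = max(i - 1, 1)
    else:
        i += 1

# Turnbull lower bound: mass in each interval is valued at its lower endpoint,
# and mass surviving beyond the highest bid is valued at that bid.
surv = np.array([1.0] + shares)
cuts = np.array([0.0] + levels)
mass = surv[:-1] - surv[1:]
lower_bound_mean = float(np.sum(cuts[:-1] * mass) + cuts[-1] * surv[-1])
print(f"Turnbull lower-bound mean WTP: {lower_bound_mean:.2f}")
```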
3. Designing and Administering the Survey Instrument
3.1 Survey Design
The design of the questionnaire is a key issue in CVM given that, as mentioned before,
the obtained values are contingent on the information provided. The information
provided in the questionnaire should be, on the one hand, consistent with scientific and
expert knowledge and, on the other, comprehensible to an average citizen who probably
knows little or nothing about the good under valuation. This apparently simple task is
complicated because it requires training in survey design, which economists do not
usually receive. Interested readers may find further details on CV
survey design in Mitchell and Carson (1989), Louviere et al. (2000) and Bradburn et
al. (2004). In the following, some key features of survey design will be given.
Producing a high quality CV study requires a substantial part of the work to be
dedicated to designing the questionnaire. Previous work with scientists and experts,
focus groups and in-depth interviews with potential respondents are essential in order
to provide a plausible and understandable description of the good under valuation and
its context. Feedback from these agents should be used iteratively in the revision of
the questionnaire. The development and testing of CV surveys, as with all primary
data-collection methods, requires iterative face-to-face pilot testing. Much effort
should be devoted to translating expert knowledge into understandable and valuable
information for respondents. Face validity is also a desired property of a well-designed
survey. It basically means that the information provided in the survey instrument
should be clear, accurate and sufficient in order to make a decision and the proposed
trade-off plausible.
The current state of practice of CV survey design usually structures the questionnaire
in six sections (Carson, 2000):
1. The first section is devoted to introducing the survey purpose, the context for
making a decision.
2. The second section provides a clear and detailed description of the good to be
valued. This section usually also collects some previous knowledge and attitudes
towards the good from the respondents.
3. The third section presents the CV scenario including the current or baseline
situation (status quo) and possible future states of the natural resource in the case
of no implementation of the proposed policy, including the institutional context in
which the good will be provided and the payment vehicle.
4. The fourth section or elicitation section asks for the respondents’ maximum WTP
to obtain the environmental good or the minimum WTA for giving it up.
5. The fifth section analyses the respondents’ understanding and certainty of the
answers provided.
6. The last section is devoted to some debriefing questions on the socio-demographic
characteristics of the respondents.
So, in the first place, the researcher needs to evaluate the amount of information
needed in order to construct a sufficiently informative and credible questionnaire. This
may be especially difficult in cases where prior knowledge about the good in question
varies substantially among the relevant population.
Secondly, the core of the CV study is the valuation scenario presented in the
questionnaire. The valuation scenario should give clear information on the change
to be valued, how it would come about, who would pay for it and how, and other
information relevant to considering the change. In developing the survey instrument,
the baseline or status quo situation and the outcomes of the proposed policy should
be carefully analysed. The elicitation part of the questionnaire provides the researcher
with information to estimate the preferences of the individuals.
Several elicitation methods have been proposed (see Table 1). In the open-ended
format, respondents are directly asked to state their maximum WTP: “how much would
you be willing to pay for this item?” Alternatively, individuals may be presented with
a discrete choice question attempting to identify if their true value is lower or higher
than a given bid. The simplest form of a discrete choice question is a take-it-or-leave-it
offer, which can also be framed as a referendum question. In the dichotomous or
closed-ended format, respondents are asked for a yes–no answer to the WTP question:
“would you be willing to pay €A for this item?” In the referendum question format,
respondents are usually informed about an environmental programme that would be
implemented only if a majority of the population favours it, in which case all the members of
society should pay. The simplicity of the closed-ended question format contrasts with
the fact that the estimation of welfare measures is more complicated, the statistical
efficiency is lower and the results are sensitive to the model specification (Niklitschek
and León, 1996). The elicitation format was extended in the early 1990s to a sequence
of paired comparisons or to a single multinomial comparison, also known as a choice
experiment. (For a review, interested readers may refer to Hoyos, 2010.)
Efficiency in the elicitation of WTP can be increased if repeated questions are
used. In the bidding game format, respondents are iteratively asked to state their
maximum WTP: “would you be willing to pay €A for this item?”. If the answer
is positive, a new question with a higher value for A is asked, and if the answer is
negative, a new question with a lower value for A is asked. The bidding game ends
when the respondent switches from “yes” to “no” or from “no” to “yes”. As opposed to
the previous single-bounded format, a double-bounded format has also been proposed
in order to recover some of the econometric precision that closed-ended questions lose
compared with open-ended questions. A variant of the double-bounded approach is
the spike model in which, prior to the elicitation question, individuals are asked if they
would pay anything (Hanemann and Kanninen, 1999). In order to overcome some
problems arising in the double-bounded approach, the one-and-one-half-bounded
approach has also been proposed (Cooper et al., 2002). In this case, the second bid is
only presented if it is consistent with the respondent’s previous answer. Finally, in the
payment cards format, respondents face a card with a list of bids (either point estimates
or interval ranges) and choose their maximum WTP.
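A minimal sketch of the bidding-game logic just described; the starting bid, step size and bounds are invented design choices that a real study would take from pre-tests.

```python
def bidding_game(answer, start_bid=20.0, step=10.0, floor=0.0, ceiling=200.0):
    """Run an iterative bidding game; `answer(bid)` returns True for a 'yes'.
    The bid is raised after a 'yes' and lowered after a 'no'; the game stops
    at the first switch, returning an interval bracketing the maximum WTP."""
    bid = start_bid
    first = answer(bid)
    while floor < bid < ceiling:
        nxt = bid + step if first else bid - step
        if answer(nxt) != first:              # the respondent switched
            return tuple(sorted((bid, nxt)))
        bid = nxt
    # no switch before the bounds were reached: WTP lies beyond the last bid
    return (bid, ceiling) if first else (floor, bid)

# A simulated respondent whose true maximum WTP is 45:
print(bidding_game(lambda b: b <= 45.0))      # (40.0, 50.0)
```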
Table 1
Typology of Elicitation Methods

                      Actual WTP obtained            Discrete interval of WTP obtained
Single question       Open-ended/direct questions    Take-it-or-leave-it offer
                      Payment cards                  Spending question offer
                      Sealed-bid auction             Interval checklist
Iterative questions   Bidding game                   Take-it-or-leave-it offer
                      Oral auction                   (with a follow-up question)

Source: Mitchell and Carson (1989).
Thirdly, the payment vehicle needs to be credible, coercive and incentive compatible.
Another related decision is whether to propose a one-time payment or a recurrent payment.
In practice, a one-time payment should be used when the valuation exercise involves
a capital investment, while recurrent payment should be used when the provision of the
good requires recurrent maintenance.
Fourthly, the analyst should address the bid vector design. Pre-test and pilot studies with
open-ended questions can help provide some information on the bounds of respondents’
WTP. The bid vector containing four to six levels of the monetary payment is considered
reasonably efficient (Carson and Hanemann, 2005).
Finally, the analyst should consider the reliability of the responses and try to minimise
the divergence between the stated survey scenario and the respondent’s view of this
scenario. Gathering information on the motives behind the responses may also help explain
the differences encountered in WTP responses. Another related issue is the certainty of
respondents regarding the response provided. It is important that the respondent feels that
his answer will have policy implications so that he feels comfortable favouring or opposing
the proposed policy. For this purpose, questionnaires usually include some reasons to
favour or oppose the policy before the valuation question, and at the end of the survey the
possibility to revise the answer is sometimes included.
3.2 Survey Administration
Another important issue in designing a survey refers to the way it is administered.
Some important issues related to the survey administration include the definition of
the relevant population, the survey mode, the sampling approach and the sample size.
The population of interest comprises the potential buyers or users of the public good under the
circumstances described in the CV survey. Defining it is not an easy task so, in practice, two
approaches are used: a legal/political perspective, where the relevant population is
determined by the jurisdiction of the institution financing the CV study, or a cost–benefit
approach, where the relevant population is defined in terms of the costs and benefits of
sampling further away from the resource in question.
The public survey may be administered in three ways: mail, telephone or in person.
New technologies have also progressively been incorporated in the form of Internet
surveys or computer-aided interviews. Obviously, the first consideration in administering
a survey is the cost of sampling, but other
issues are also important, namely, the capacity of the survey administration mode to
account for the relevant population, the different ways in which the stimuli will be
presented and the degree of control that the researcher holds over these stimuli. Mail
and telephone surveys are cheaper but they have two limitations: visual stimuli cannot
be presented and sample selection bias may appear. In-person interviews, on the other
hand, are more expensive but they are more flexible and reliable. Finally, computeraided interviews may be helpful for giving visual stimuli or when experimental designs
with a high number of choice sets are used.
Sampling approaches may also differ depending on the particular good under
valuation. A random sample of the relevant population is best for deriving appropriate
inference, but this is not always possible (for example, on-site recreation demand
surveys will be affected by sample selection problems). In any case, stratification of the
sample increases its efficiency. This stratification strategy is usually based on political
boundaries, age, gender, etc. Stratification is usually accompanied by some sort of
clustering so that some locations where the survey will be conducted are previously
selected in order to reduce the interviewer’s travel time and costs. Finally, the sample
size should be decided taking into account the level of precision to be achieved and the
hypotheses to be tested.
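As a rough illustration of the precision/sample-size trade-off, the standard formula for estimating a population proportion (the quantity behind a referendum-style “yes” share) can serve as a first-pass calculation; it is a generic survey-sampling result, not a CV-specific rule.

```python
import math

def sample_size_proportion(margin, p=0.5, z=1.96):
    """Sample size for a proportion, n = z^2 * p * (1 - p) / margin^2.
    p = 0.5 is the conservative, variance-maximising choice; z = 1.96
    corresponds to a 95% confidence level."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size_proportion(0.05))   # 385 respondents for a +/-5 point margin
print(sample_size_proportion(0.03))   # 1068 respondents for a +/-3 point margin
```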
4. Econometrics of Contingent Valuation
In this section, we will focus on modelling the most common elicitation format, the
single-bounded closed-ended question. More details on the econometrics of
CV can be found in Haab and McConnell (2002).
The RUM developed by Hanemann (1984) provides the basic framework
for analysing closed-ended single-bounded responses. This approach proceeds
by specifying an indirect utility function and a particular distribution for the error
component. The indirect utility function for respondent j in state i of the change to be
valued (i = 0 being the status quo and i = 1 the final state) can be written as:
$u_{ij} = u_i(y_j, z_j, \varepsilon_{ij})$,

where $y_j$ denotes the respondent’s income level, $z_j$ is an m-dimensional vector of
the individual’s characteristics including questionnaire variations and $\varepsilon_{ij}$ is the error
component. In this case, utility is assumed to arise from income and the presence or
absence of the environmental change. It is referred to as the indirect utility function
since utility is a function of income and not goods (it is sometimes known as the
conditional indirect utility function since utility is conditional on the choice made).
Based on this model, the probability of observing a positive response to a specified
amount $t_j$ would be:

$\Pr(yes_j) = \Pr\{u_1(y_j - t_j, z_j, \varepsilon_{1j}) \ge u_0(y_j, z_j, \varepsilon_{0j})\}$.
Parametric estimation of the probability statement above requires two modelling
decisions: the functional form of utility and the distribution of the error term. The
utility function is generally specified as additively separable in deterministic and
stochastic preferences, that is:
$u_{ij} = u_i(y_j, z_j, \varepsilon_{ij}) = v_i(y_j, z_j) + \varepsilon_{ij}$.
Consequently, the probability of a “yes” response for respondent j becomes:
$\Pr(yes_j) = \Pr\{v_1(y_j - t_j, z_j) + \varepsilon_{1j} \ge v_0(y_j, z_j) + \varepsilon_{0j}\}$.
So, in order to understand the decision to answer positively, the utility difference
between the “yes” and “no” states needs to be examined. In other words, the probability
of a certain response is examined as a function of the differences in the utilities at the
base and final states. Given that the random term can be rewritten as j 1 j 0 j , the
probability of a positive response is:
Pr( yes j ) 1 F [(v1 ( y j t j , z j ) v0 ( y j , z j ))] ,
where F (a) is the probability that the random variable is less than a. In the linear
utility function specification the deterministic part of a respondent’s preferences is
linear both in covariates and income:
$v_{ij} = \alpha_i z_j + \beta_i y_j$,

where $\alpha_i$ denotes an m-dimensional vector of parameters, $z_j$ is an m-dimensional
vector of characteristics of the individual including the characteristics of the given CV
scenario, $y_j$ is the respondent’s discretionary income and $\beta_i$ is the marginal utility of
income. Denoting by $t_j$ the bid offered in the CV scenario, the deterministic utility for
the initial and final state is:
$v_{0j}(y_j) = \alpha_0 z_j + \beta_0 y_j$
$v_{1j}(y_j) = \alpha_1 z_j + \beta_1 (y_j - t_j)$

and assuming that the marginal utility of income is constant in the quality change (i.e.
$\beta_0 = \beta_1 = \beta$), the change in deterministic utility for respondent $j$ can be written as:

$v_{1j} - v_{0j} = (\alpha_1 - \alpha_0) z_j + \beta_1 (y_j - t_j) - \beta_0 y_j = \alpha z_j - \beta t_j$,

where $\alpha = \alpha_1 - \alpha_0$, and the probability of a “yes” response becomes:

$\Pr(yes_j) = \Pr(\alpha z_j - \beta t_j + \varepsilon_j \ge 0)$.
As mentioned before, parametric estimation of the parameters of the change in the
utility requires some assumptions about the nature of the random term. The general
assumption that $\varepsilon_j$ are independently and identically distributed (IID) with mean
zero facilitates the wide use of two symmetric distributions: the normal and logistic
distributions. In the former, when the error term is thought to be a standard normal
random variable, the response function becomes a probit model; in the latter, when the
error term is thought to be a logistic random variable, the response function becomes
a logit model.
The advantage of the logit model is that it has a closed-form solution, which
facilitates its calculation. However, recent computational developments have minimised
this difference. From a statistical point of view, both distributions are symmetric,
although the logit has thicker tails. It is important to note a fundamental characteristic
of dichotomous dependent variable models: the estimated parameters are identified only
up to scale, being implicitly divided by the unknown standard deviation of the error term.
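The following sketch shows what estimation of the single-bounded model can look like in practice, fitting a probit of the yes/no responses on the covariates and the (negated) bid; the data are simulated and the “true” parameter values are assumptions made only for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulate single-bounded data from Delta v = alpha'z - beta*t + eps, eps ~ N(0, 1).
n = 1000
z = np.column_stack([np.ones(n), rng.normal(0, 1, n)])  # constant + one covariate
t = rng.choice([5.0, 10.0, 20.0, 40.0, 80.0], size=n)   # randomly assigned bids
alpha_true, beta_true = np.array([1.5, 0.5]), 0.04
yes = (z @ alpha_true - beta_true * t + rng.normal(0, 1, n) > 0).astype(int)

# Probit fit of Pr(yes) on the covariates and the negated bid, so that the
# last coefficient is beta directly.
X = np.column_stack([z, -t])
fit = sm.Probit(yes, X).fit(disp=0)
alpha_hat, beta_hat = fit.params[:-1], fit.params[-1]
print("alpha:", np.round(alpha_hat, 3), "beta:", round(beta_hat, 4))

# If the true error variance differs from 1, the fit recovers alpha/sigma and
# beta/sigma only: the scale normalisation noted in the text.
```

Replacing sm.Probit with sm.Logit gives the logit counterpart.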
Once the response model to the CV responses is built, a measure of welfare (i.e.
people’s WTP for the change to be valued) may be estimated. Given that the fitted
response model was derived from an underlying WTP distribution, GC , the underlying
WTP distribution can be recovered from the fitted response model. However,
calculating WTP with linear random utility models requires two sources of uncertainty
(parameters and preferences) to be taken into account as well as the variability induced
by the covariates included in the model.
In dealing with these sources of uncertainty, it is usually assumed that the parameters
are given and measures of central tendency over the distribution of preferences are
pursued, mainly mean and median WTP. The expression for the expectation of WTP
with respect to preference uncertainty is:
$E(WTP_j \mid \alpha, \beta, z_j) = \frac{\alpha z_j}{\beta}$.
The median of the distribution of WTP with respect to preference uncertainty is
obtained by setting the probability that final utility exceeds initial utility equal to 0.5:

$Md(WTP_j \mid \alpha, \beta, z_j) = \frac{\alpha z_j}{\beta}$.
So, in the case of linear utility functions and symmetric distributions of error terms, the
mean and median WTP are equal. It is important to note that the subscript j for the previous
expressions suggests that each respondent has an expected or median WTP with respect to
preference uncertainty.
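Under the linear specification, the point welfare measure is simply the ratio $\alpha z_j/\beta$. One common way of carrying parameter uncertainty into the welfare estimate is a Krinsky and Robb style simulation, sketched below; Krinsky–Robb is a standard device in this literature rather than something prescribed above, and the estimates and diagonal covariance matrix are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probit estimates for Delta v = alpha'z - beta*t + eps.
params = np.array([1.48, 0.52, 0.041])      # (alpha_0, alpha_1, beta)
cov    = np.diag([0.01, 0.004, 1e-5])       # assumed parameter covariance
z_bar  = np.array([1.0, 0.2])               # covariates of a "mean" respondent

# With linear utility and a symmetric error distribution, mean and median
# WTP coincide at alpha'z / beta.
alpha, beta = params[:2], params[2]
print(f"point estimate of mean/median WTP: {(alpha @ z_bar) / beta:.2f}")

# Krinsky-Robb style simulation: draw parameter vectors from their asymptotic
# normal distribution and form the WTP ratio for each draw.
draws = rng.multivariate_normal(params, cov, size=5000)
wtp_draws = (draws[:, :2] @ z_bar) / draws[:, 2]
lo, hi = np.percentile(wtp_draws, [2.5, 97.5])
print(f"95% simulation interval: [{lo:.2f}, {hi:.2f}]")
```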
Linear utility models are the most common specification in empirical applications. The
only problem with linear utility models is that the marginal utility of income is assumed to be
constant across the CV scenarios. In practice, linear models are justified on the grounds that
the payment in CV studies usually consists of a very small share of income. Nevertheless,
other models allowing income effects have been proposed: the random utility model
log linear in income, the random utility model with Box–Cox transformation in income,
etc. Other parametric and non-parametric models for CV are analysed more deeply in Haab
and McConnell (2002).
Some further econometric issues are relevant to the processing of CV data. The first
issue refers to the structure of the WTP distributions, given that this is the main output of
a CV survey. No matter whether a parametric or a non-parametric approach is adopted,
some model specification decisions need to be taken: (1) whether negative WTP values
are allowed, (2) the potential existence of corner solutions (spike at zero WTP), (3) how to
ensure weak monotonicity of the WTP distribution to increases in the monetary amount,
(4) the smoothness of the WTP distribution as it departs from zero and (5) how to deal with
the right-hand tail of the WTP distribution (Carson and Hanemann, 2005).
A second issue concerns the bid design, that is, the set of bids that will be randomly
assigned to respondents. The optimal bid design should have as many design points
placed along the cost space as model parameters. Usually, these points are placed so as
to maximise the determinant of the information matrix in what is known as a D-optimal
criterion (Cooper, 1993). Alternatively, given that our welfare estimate of interest is the
ratio of two parameters, C-optimal designs have also been proposed with the objective of
minimising the confidence interval for the mean WTP (Alberini, 1995). Other proposed
designs include Bayesian designs, minimax designs and sequential designs (Vasquez et
al., 2007). In practice, the model parameters are unknown so the bid vector used in a CV
survey is generally obtained from pre-test and pilot studies. Four to six bid points produce
reasonably efficient and robust estimates (Carson and Hanemann, 2005).
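In lieu of a full D- or C-optimal algorithm, a simple heuristic consistent with this advice is to place four to six bids at spread-out quantiles of the pilot-study WTP distribution; the sketch below assumes a normal pilot distribution with invented moments.

```python
import numpy as np
from scipy.stats import norm

# Pilot-study estimates of the WTP distribution (hypothetical values).
mu_pilot, sigma_pilot = 50.0, 20.0

# Place the bid vector at spread-out quantiles of the pilot distribution,
# then round to amounts that look credible to respondents.
quantiles = [0.15, 0.35, 0.50, 0.65, 0.85]
bids = norm.ppf(quantiles, loc=mu_pilot, scale=sigma_pilot)
bids = np.round(bids / 5) * 5     # round to the nearest 5 currency units
print(bids)                       # [30. 40. 50. 60. 70.]
```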
Another frequent empirical problem relates to the treatment of don’t knows and
protest answers. It is clear that the uncertainty of the respondent about the answer to a CV
study provides low quality data and that is the main reason why the Blue Ribbon Panel
recommended including a “don’t know” option in addition to the typical referendum question
(Arrow et al., 1993). However, recent literature on this issue has shown that allowing for
a “don’t know” option does not significantly affect the quality of the survey responses but
it reduces substantially the amount of information collected. In any case, these answers
need to be econometrically treated and three options are available: exclude these answers
from the data set, recode them as “no” responses or impute them using some specific
model. Dropping “don’t know” answers is equivalent to allocating them proportionally
into positive/negative responses. Nevertheless, Carson et al. (1998) analyse in depth the
issue of voting uncertainty, concluding that these answers tend to be “no” responses and
so they would be better treated as negative answers. Protest zeros, on the other hand, are
respondents who answer “no” although a follow-up question on the reasons for this answer
suggests that they might have some positive WTP. Typical protest answers are those claiming that the
government or the one causing the damage should pay for its recovery. These answers are
usually dropped from the data but some authors argue that given the context-specific nature
of the CV survey these respondents would more probably vote against the proposed policy
and they are better treated as “no” responses (Carson and Hanemann, 2005). It is important
to note that recoding “don’t know” answers and protest zeros as “no” responses provides
more conservative estimates of WTP, which might be interesting from a policy perspective.
Sample selection models have also been proposed in order to take into account both zero
values and protest answers in the model estimates (Strazzera et al., 2003).
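An illustrative recode implementing the conservative treatment described above, counting “don’t know” answers and protest zeros as “no” responses; the column names and protest categories are hypothetical.

```python
import pandas as pd

# Hypothetical survey extract: `response` in {"yes", "no", "dont_know"} and a
# follow-up `reason` recording the motive behind "no" answers.
df = pd.DataFrame({
    "response": ["yes", "no", "dont_know", "no", "yes"],
    "reason":   [None, "too expensive", None, "polluter should pay", None],
})

PROTEST_REASONS = {"polluter should pay", "government should pay"}

# Conservative recode: anything other than an explicit "yes" (including
# "don't know" and protest "no" answers) is treated as a "no".
df["yes"] = (df["response"] == "yes").astype(int)
df["protest"] = df["reason"].isin(PROTEST_REASONS)
# Flagging protests separately permits a sensitivity check that drops them.
print(df[["response", "yes", "protest"]])
```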
Uncertainty may also be specifically treated in analysing responses to CV surveys. The
identification and treatment of respondent uncertainty is also a new trend in environmental
valuation. Carson and Hanemann (2005) identify several problems with introducing an
extra source of uncertainty into CV statistical models: firstly, there is no room for adding an
extra source of uncertainty in the RUM model without adding more structure to the model;
secondly, identifying the extra source of uncertainty by modifying the utility function
is complicated because parametric models already allow for some form of deviation;
and thirdly, identifying the extra source of uncertainty by including a psychometric
question (typically on a Likert scale) is potentially difficult. Allowing for heterogeneity in the
error component may be another way of approaching these phenomena. Finally, parametric
and non-parametric approaches are used to test for differences in WTP distributions due
to scope. More recently, convolutions and bootstrap approaches have been applied (Poe et
al., 2005).
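The complete combinatorial approach of Poe et al. (2005) is straightforward to sketch: given two sets of simulated WTP draws (invented here), every pairwise difference is formed and the share of non-positive differences serves as an approximate one-sided p-value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sets of simulated WTP draws, e.g. Krinsky-Robb draws for two scope
# treatments; the distributions used here are purely illustrative.
wtp_small = rng.normal(35.0, 6.0, size=2000)
wtp_large = rng.normal(50.0, 8.0, size=2000)

# Complete combinatorial test: all pairwise differences between the two sets,
# with the share of non-positive differences as a one-sided p-value for the
# hypothesis that WTP_large exceeds WTP_small.
diffs = wtp_large[:, None] - wtp_small[None, :]
p_value = (diffs <= 0).mean()
print(f"approximate one-sided p-value: {p_value:.4f}")
```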
Validity and reliability have been the centre of the debate between proponents and
opponents of the CVM. Validity refers to the extent to which what one wishes to measure
corresponds to what was actually measured, i.e. a measure of accuracy. Reliability, on the
other hand, refers to the replicability of the obtained results. Testing validity is difficult
because, given that the maximum WTP is inherently unobservable, there is no correct
measure for results to be compared with. As a consequence, validity is usually determined
through construct validity (sometimes referred to as internal validity) and convergent
validity (sometimes referred to as external validity). Construct validity tests compare
the consistency of the measurement made with factors such as economic theory, while
convergent validity tests compare the measurement made with the results of a different
valuation technique. Reliability is usually tested in two ways: the stability of WTP measures
over time and test–retest reliability, where a sample of respondents is interviewed twice
with the same survey instrument.
5. Conclusions
The literature on CV since the early 1960s is vast. Carson (2007) collects over six
thousand papers from one hundred countries in fifty years of CV history. Such a large
number of applications allows us to draw two conclusions about the state of the
art of the method: on the one hand, despite some interesting debates in the 1990s,
CV is generally accepted at both an academic and a political level; and, on the other,
difficulties in discriminating the nature and the quality of the survey instrument make
any general valid statement about the properties of CVM impossible. In any case, the
controversies surrounding the use of CVM have actually promoted “the most serious
investigation of individual preferences ever undertaken in economics” (Carson et al.,
2001).
Almost fifty years of CVM have provided a strong theoretical and empirical basis,
although there is room for refinement in the coming years. Many of the criticisms have
proved to be erroneous, while others, though correct, are not CV specific but rather inherent
problems of the neoclassical framework. The theoretical inconsistency of empirical CV
results has mainly been due to incorrectly transferring the theoretical framework for price
changes to the imposed quantity changes that characterise most environmental goods and
services. However, as Carson and Hanemann (2005) point out, welfare estimation with
stated preference methods requires a correct understanding of the differences between
private and public goods:
Public goods are a special case of quantity rationed goods and, as a result, the focus should be on
a quantity space with an inverse demand system rather than price space with an ordinary demand
system where consumers are free to choose their optimal consumption levels.
The main challenge posed to economists by the widespread use of CVM is the
correct design of the questionnaire, given that they are not usually trained in this
matter. A reliable CV survey is complicated to design and expensive to administer but,
as Carson et al. (2001) argue, many of the alleged problems with CVM may be solved by
careful design and administration. CV surveys should be properly and carefully conducted
and they should include internal consistency tests so that the validity and reliability of the
results can be further tested.
The main advantage of the use of CVM in environmental economics has been
its ability to measure the benefits of environmental changes in a wide range of situations.
It can also aid public decision making by improving the understanding of the good under
valuation and of individuals’ preferences for this good. Even acknowledging the imprecision
of the welfare measures obtained, it may be more dangerous to leave public decision making solely in the
hands of politicians or “experts”. While experts may determine the physical damages or
the costs of restoration of a natural resource, only the public can assess what this change
is worth. In the words of Hanemann (1994): “when the public valuation is the object of
measurement, a well-designed CV survey is one way of consulting the relevant experts, the
public itself”.
References
Alberini, A. (1995), “Optimal Designs for Discrete Choice Contingent Valuation Surveys: Single-Bound,
Double-Bound, and Bivariate Models.” Journal of Environmental Economics and Management, 28,
pp. 287-306.
Alberini, A., Kahn, J. (2006), Handbook on Contingent Valuation. Cheltenham, UK: Edward Elgar.
Aldy, J., Krupnick, A. (2009), “Introduction to the Frontiers of Environmental and Resource Economics.”
Journal of Environmental Economics and Management, 57, pp. 1-4.
Arrow, K., Fisher, A. C. (1974), “Environmental Preservation, Uncertainty and Irreversibility.” Quarterly
Journal of Economics, 88, pp. 313-319.
Arrow, K. et al. (1993), “Report of NOAA Panel on Contingent Valuation.” Federal Register, 58, pp.
4601-4614.
Bishop, R., Heberlein, T. (1979), “Measuring Values of Extramarket Goods: Are Direct Measures
Biased?” American Journal of Agricultural Economics, 61, pp. 926-930.
Bowen, H. R. (1943), “The Interpretation of Voting in the Allocation of Economic Resources.” Quarterly
Journal of Economics, 58, pp. 27-48.
Bradburn, N., Sudman, S., Wansink, B. (2004), Asking Questions: The Definitive Guide to
Questionnaire Design. San Francisco: Jossey-Bass.
Cameron, T. A. (1988), “A New Paradigm for Valuing Non-Market Goods Using Referendum Data:
Maximum Likelihood Estimation by Censored Logistic Regression.” Journal of Environmental
Economics and Management, 15, pp. 355-379.
Cameron, T. A., James, M. D. (1987), “Efficient Estimation Methods for ‘Close-Ended’ Contingent
Valuation Surveys.” The Review of Economics and Statistics, 69, pp. 269-276.
Carson, R. T. (2000), “Contingent Valuation: a User’s Guide.” Environmental Science & Technology,
34, pp. 1413-1418.
Carson, R. T. (2007), Contingent Valuation: A Comprehensive Bibliography and History. Cheltenham:
Edward Elgar.
Carson, R. T., Hanemann, W. M. (2005), “Contingent Valuation,” in Mäler, K. G., Vincent, J. R., eds.,
Handbook of Environmental Economics: Valuing Environmental Changes. Vol. 2. Amsterdam:
Elsevier, pp. 821-936.
Carson, R. T., Flores, N., Meade, N. F. (2001), “Contingent Valuation: Controversies and Evidence.”
Environmental and Resource Economics, 19, pp. 173-210.
Carson, R. T. et al. (1998), “Referendum Design and Contingent Valuation: the NOAA Panel’s No Vote
Recommendation.” Review of Economics and Statistics, 80, pp. 484-487.
Carson, R. T. et al. (2003), “Contingent Valuation and Lost Passive Use: Damages from the Exxon
Valdez Oil Spill.” Environmental and Resource Economics, 25, pp. 257-286.
Ciriacy-Wantrup, S. V. (1947), “Capital Returns from Soil-Conservation Practices.” Journal of Farm
Economics, 29, pp. 1181-1196.
Ciriacy-Wantrup, S. V. (1952), Resource Conservation: Economics and Policy. Berkeley: University
of California Press.
Cooper, J. C. (1993), “Optimal Bid Selection for Dichotomous Choice Contingent Valuation Surveys.”
Journal of Environmental Economics and Management, 24, pp. 25-40.
Cooper, J. C., Hanemann, W. M., Signorello, G. (2002), “One-And-One-Half-Bound Dichotomous
Choice Contingent Valuation.” The Review of Economics and Statistics, 84, pp. 742-750.
Copas, J. B. (1983), “Plotting p against x.” Applied Statistics, 32, pp. 25-31.
Davis, R. K. (1963a), “Recreation Planning as an Economic Problem.” Natural Resources Journal, 3,
pp. 239-249.
Davis, R. K. (1963b), “The Value of Outdoor Recreation: An Economic Study of the Maine Woods.”
Dissertation, Harvard University.
Diamond, P. A., Hausman, J. A. (1994), “Contingent Valuation: Is Some Number Better than No
Number?” The Journal of Economic Perspectives, 8, pp. 45-64.
Haab, T. C., McConnell, K. E. (2002), Valuing Environmental and Natural Resources. The Econometrics
of Non-Market Valuation. Cheltenham, UK: Edward Elgar Publishing Limited.
Hanemann, W. M. (1984), “Welfare Evaluations in Contingent Valuation Information with Discrete
Responses.” American Journal of Agricultural Economics, 66, pp. 332-341.
Hanemann, W. M. (1994), “Valuing the Environment through Contingent Valuation.” The Journal of
Economic Perspectives, 8, pp. 19-43.
Hanemann, W. M., Kanninen, B. J. (1999), “The Statistical Analysis of Discrete-Response CV Data,”
in Bateman, I. J., Willis, K. G., eds., Valuing Environmental Preferences. Oxford: Oxford University
Press, pp. 302-441.
Hausman, J. (1993), Contingent Valuation: A Critical Assessment. Amsterdam: North-Holland.
Hoyos, D. (2010), “The State of the Art of Environmental Valuation with Discrete Choice Experiments.”
Ecological Economics, 69, pp. 2372-2381.
Knetsch, J. L., Davis, R. K. (1966), “Comparisons of Methods for Resource Evaluation,” in Kneese,
A. V., Smith, S. C., eds., Water Research. Baltimore: Johns Hopkins University Press, pp. 125-142.
Kriström, B. (1990), “A Nonparametric Approach to the Estimation of Welfare Measures in Discrete
Response Valuation Studies.” Land Economics, 66, pp. 135-139.
Krutilla, J. V. (1967), “Conservation Reconsidered.” American Economic Review, 57, pp. 777-786.
Louviere, J., Hensher, D. A., Swait, J. (2000), Stated Choice Methods: Analysis and Applications.
Cambridge: Cambridge University Press.
McConnell, K. E. (1990), “Models for Referendum Data: the Structure of Discrete Choice Models for
Contingent Valuation.” Journal of Environmental Economics and Management, 18, pp. 19-34.
Mitchell, R. C., Carson, R. T. (1989), Using Surveys to Value Public Goods: The Contingent Valuation
Method. Washington, D.C.: RFF Press.
Niklitschek, M., León, J. (1996), “Combining Intended Demand and Yes/No Responses in the Estimation
of Contingent Valuation Models.” Journal of Environmental Economics and Management, 31, pp.
387-402.
Poe, G. L., Giraud, K. L., Loomis, J. B. (2005), “Computational Methods for Measuring the Difference
of Empirical Distributions.” American Journal of Agricultural Economics, 87, pp. 353-365.
Portney, P. R. (1994), “The Contingent Valuation Debate: Why Economists Should Care.” The Journal
of Economic Perspectives, 8, pp. 3-17.
Strazzera, E. et al. (2003), “Modelling Zero Values and Protest Responses in Contingent Valuation
Surveys.” Applied Economics, 35, pp. 133-138.
Vasquez, F., Cerda, A., Orrego, S. (2007), Valoración Económica del Ambiente. Buenos Aires:
Thomson.
Weisbrod, B. A. (1964), “Collective Consumption Services of Individual Consumption Goods.”
Quarterly Journal of Economics, 78, pp. 471-477.