This paper develops a new method for estimating a demand function and the welfare consequences of price changes. The method is applied to gasoline demand in the U.S. and is applicable to other goods. The method uses shape restrictions derived from economic theory to improve the precision of a nonparametric estimate of the demand function. Using data from the U.S. National Household Travel Survey, we show that the restrictions are consistent with the data on gasoline demand and remove the anomalous behavior of a standard nonparametric estimator. Our approach provides new insights about the price responsiveness of gasoline demand and the way responses vary across the income distribution. We find that price responses vary nonmonotonically with income. In particular, we find that low- and high-income consumers are less responsive to changes in gasoline prices than are middle-income consumers. We find similar results using comparable data from Canada.
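A minimal sketch of the central idea — imposing a theory-driven shape restriction (here, downward-sloping demand) on an otherwise flexible regression — on simulated data. The polynomial basis, the grid, and all numbers are illustrative stand-ins, not the paper's actual estimator:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data: log quantity demanded falls in log price, with noise.
n = 500
logp = rng.uniform(0.0, 1.0, n)
logq = 2.0 - 0.5 * logp + 0.15 * rng.standard_normal(n)

deg = 4
def basis(x):
    return np.vander(x, deg + 1, increasing=True)      # columns 1, x, ..., x^deg

def dbasis(x):
    return np.column_stack([np.zeros_like(x)] +
                           [k * x ** (k - 1) for k in range(1, deg + 1)])

X = basis(logp)
grid = np.linspace(0.0, 1.0, 50)
D = dbasis(grid)                                       # slope of the fit on a grid

theta0 = np.linalg.lstsq(X, logq, rcond=None)[0]       # unconstrained starting fit
res = minimize(lambda t: np.sum((logq - X @ t) ** 2), theta0,
               method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda t: -D @ t}])

fitted = basis(grid) @ res.x                           # downward-sloping on the grid
```

The inequality constraint requires the fitted slope to be nonpositive at every grid point, which is how the downward-sloping restriction disciplines the flexible fit.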
Berkson errors are commonplace in empirical microeconomics. In consumer demand this form of measurement error occurs when the price an individual pays is measured by the (weighted) average price paid by individuals in a specified group (e.g., a county), rather than the true transaction price. We show the importance of such measurement errors for the estimation of demand in a setting with nonseparable unobserved heterogeneity. We develop a consistent estimator using external information on the true distribution of prices. Examining the demand for gasoline in the U.S., we document substantial within-market price variability, and show that there are significant spatial differences in the magnitude of Berkson errors across regions of the U.S. Accounting for Berkson errors is found to be quantitatively important for estimating price effects and for welfare calculations. Imposing the Slutsky shape constraint greatly reduces the sensitivity to Berkson errors.
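Why Berkson errors matter when demand is nonlinear can be seen in a short simulation: regressing on the assigned group-average price w recovers a smoothed curve, E[f(w + u) | w], which by Jensen's inequality lies above f(w) wherever f is convex. The sketch below is illustrative only and is not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, per_group = 300, 50

# Berkson structure: true price = assigned group price + independent noise.
assigned = rng.uniform(0.8, 1.2, n_groups)             # price the analyst records
true_price = (np.repeat(assigned, per_group)
              + rng.normal(0.0, 0.1, n_groups * per_group))

# A convex true demand curve q = f(p); demand responds to the TRUE price.
f = lambda p: 1.0 / p
q = f(true_price) + 0.02 * rng.standard_normal(true_price.size)

# What regression on the assigned price recovers is E[f(w + u) | w],
# which exceeds f(w) for convex f -- a systematic bias.
group_avg_q = q.reshape(n_groups, per_group).mean(axis=1)
print("mean bias:", np.mean(group_avg_q - f(assigned)))  # clearly positive
```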
As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly over the past few decades. Major advances have taken place in the analysis of cross-sectional data by means of semi-parametric and non-parametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged, and attempts have been made to take it into account either by integrating out its effects or by modeling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment evaluation have been given a more satisfactory foundation. New time series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Non-linear econometric techniques are used increasingly in the analysis of cross section and time series observations. Applications of Bayesian techniques to econometric problems have been given new impetus, largely thanks to advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided investigators with a unifying framework in which the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus paving the way for establishing the foundation of "real time econometrics". This paper attempts to provide an overview of some of these developments.
Monte Carlo experiments have shown that tests based on generalized-method-of-moments estimators often have true levels that differ greatly from their nominal levels when asymptotic critical values are used. This paper gives conditions under which the bootstrap provides asymptotic refinements to the critical values of t tests and the test of overidentifying restrictions. Particular attention is given to the case of dependent data. It is shown that with such data, the bootstrap must sample blocks of data and that the formulae for the bootstrap versions of test statistics differ from the formulae that apply with the original data. The results of Monte Carlo experiments on the numerical performance of the bootstrap show that it usually reduces the errors in level that occur when critical values based on first-order asymptotic theory are used. The bootstrap also provides an indication of the accuracy of critical values obtained from first-order asymptotic theory.
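The prescription for dependent data — resample blocks of observations rather than individual points, with statistics computed by formulae adapted to the blocking — can be illustrated generically. The sketch below obtains a bootstrap critical value for a studentized mean from nonoverlapping block means; it shows block resampling in miniature, not the paper's GMM-specific formulae:

```python
import numpy as np

def block_bootstrap_tcrit(x, block_len, n_boot=999, level=0.95, rng=None):
    """Symmetric bootstrap critical value for the t statistic of the mean
    of a weakly dependent series, using nonoverlapping blocks."""
    rng = rng or np.random.default_rng(0)
    n_blocks = len(x) // block_len
    bm = x[: n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)
    xbar = bm.mean()
    t_stars = np.empty(n_boot)
    for b in range(n_boot):
        ms = bm[rng.integers(0, n_blocks, n_blocks)]   # resample whole blocks
        # Center at the full-sample mean, as the bootstrap requires.
        t_stars[b] = (ms.mean() - xbar) / (ms.std(ddof=1) / np.sqrt(n_blocks))
    return np.quantile(np.abs(t_stars), level)

# Example: an AR(1) series, for which i.i.d. resampling would be invalid.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
print(block_bootstrap_tcrit(x, block_len=25))          # cf. the normal value 1.96
```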
This paper develops a new method for estimating the demand function for gasoline and the deadweight loss due to an increase in the gasoline tax. The method is also applicable to other goods. The method uses shape restrictions derived from economic theory to improve the precision of a nonparametric estimate of the demand function. Using data from the U.S. National Household Travel Survey, we show that the restrictions are consistent with the data on gasoline demand and remove the anomalous behavior of a standard nonparametric estimator. Our approach provides new insights about the price responsiveness of gasoline demand and the way responses vary across the income distribution. We reject constant elasticity models and find that price responses vary non-monotonically with income. In particular, we find that low- and high-income consumers are less responsive to changes in gasoline prices than are middle-income consumers.
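Given any estimated demand curve, the deadweight loss of a tax can be approximated by numerical integration: the consumer-surplus loss between the old and new price, net of the tax revenue collected. A minimal sketch using Marshallian surplus (so income effects are ignored) and a hypothetical constant-elasticity curve standing in for the nonparametric estimate:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical estimated demand curve (stand-in for the nonparametric fit).
q_hat = lambda p: 10.0 * p ** -0.5                     # constant elasticity -0.5

def deadweight_loss(q, p0, p1):
    """Marshallian DWL of a price rise from p0 to p1 under flat supply:
    lost consumer surplus minus tax revenue collected."""
    cs_loss, _ = quad(q, p0, p1)                       # area under the demand curve
    revenue = q(p1) * (p1 - p0)                        # quantity at the taxed price
    return cs_loss - revenue

print(deadweight_loss(q_hat, p0=3.00, p1=3.50))        # e.g., a $0.50/gal tax
```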
In this paper we propose a novel method to construct confidence intervals in a class of linear inverse problems. First, point estimators are obtained via a spectral cut-off method depending on a regularisation parameter α, which determines the bias of the estimator. Next, the proposed confidence interval corrects for this bias by explicitly estimating it based on a second regularisation parameter ρ, which is asymptotically smaller than α. The coverage error of the interval is shown to converge to zero. The proposed method is illustrated via two simulation studies, one in the context of functional linear regression and the second in the context of instrumental regression.
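In finite dimensions, the first-step spectral cut-off estimator is simply a truncated singular value decomposition: invert only the components whose singular values exceed a threshold tied to the regularisation parameter. A sketch under illustrative assumptions (the paper works in function space; the operator and numbers below are made up):

```python
import numpy as np

def spectral_cutoff(K, y, alpha):
    """Regularised solution of K f = y: invert only singular values >= alpha."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s >= alpha                                  # the cut-off rule
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

n = 100
x = np.linspace(0.0, 1.0, n)
K = np.exp(-5.0 * np.abs(x[:, None] - x[None, :])) / n  # smoothing operator
f_true = np.sin(2 * np.pi * x)
rng = np.random.default_rng(2)
y = K @ f_true + 1e-3 * rng.standard_normal(n)         # noisy observation of K f

f_naive = np.linalg.solve(K, y)                        # exact inverse amplifies noise
f_reg = spectral_cutoff(K, y, alpha=1e-2)              # truncation stabilises it
print(np.linalg.norm(f_naive - f_true), np.linalg.norm(f_reg - f_true))
```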
Standard approaches to constructing nonparametric confidence bands for functions cannot be relied on either to achieve a high degree of coverage accuracy or to produce bands that err on the side of conservatism. In this paper we suggest new, simple bootstrap methods for constructing confidence bands using conventional smoothing parameter choices. In particular, our approach does not require a nonstandard smoothing parameter. The basic algorithm requires only a single application of the bootstrap, although a more refined, double bootstrap technique is also suggested. The greater part of our attention is directed to regression problems, but we also discuss the application of our methods to constructing confidence bands for density functions. The resulting confidence regions depend on the choice of two parameters α and ξ, in the range 0 < α, ξ < 1, and the methodology results in confidence bands that, asymptotically, cover the regression mean at x with probability at least 1 − α for at least a proportion 1 − ξ of values of x. In particular, the bands are pointwise rather than simultaneous. Pointwise bands are more popular with practitioners and are the subject of a substantial majority of research on nonparametric confidence bands for functions.
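A naive version of the setting — residual-bootstrap pointwise bands for a kernel regression with a conventional bandwidth — looks as follows. Such bands ignore smoothing bias, which is why they cannot be relied on for coverage at every x; the paper's contribution is a calibration guaranteeing coverage for at least a proportion 1 − ξ of x values. Everything below is an illustrative baseline, not the paper's algorithm:

```python
import numpy as np

def nw(x_eval, x, y, h):
    """Nadaraya-Watson regression estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

grid = np.linspace(0.05, 0.95, 50)
h = 0.08                                               # a conventional bandwidth
fit_x = nw(x, x, y, h)
resid = y - fit_x
resid -= resid.mean()

# Residual bootstrap: regenerate data around the fit, refit, and take
# pointwise quantiles of the bootstrap fits at each grid point.
boot = np.empty((999, grid.size))
for b in range(999):
    y_star = fit_x + rng.choice(resid, n, replace=True)
    boot[b] = nw(grid, x, y_star, h)
lo, hi = np.quantile(boot, [0.025, 0.975], axis=0)     # naive pointwise band
```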
The proportional hazard model with unobserved heterogeneity gives the hazard function of a random variable conditional on covariates and a second random variable representing unobserved heterogeneity. This paper shows how to estimate the baseline hazard function and the distribution of the unobserved heterogeneity nonparametrically. The baseline hazard function and heterogeneity distribution are assumed to satisfy smoothness conditions but are not assumed to belong to known, finite-dimensional, parametric families. Existing estimators assume that the baseline hazard function or heterogeneity distribution belongs to a known parametric family. Thus, the estimators presented here are more general than existing ones.
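The model is easy to state and simulate: the hazard is λ(t | x, u) = u λ0(t) exp(xβ), where u is an unobserved frailty independent of the covariate x. The sketch below draws durations from a Weibull baseline with gamma frailty by inverting the integrated hazard; it illustrates the data-generating process only, not the paper's nonparametric estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
k, beta = 1.5, 0.7                                     # Weibull shape, covariate effect

x = rng.standard_normal(n)
u = rng.gamma(shape=2.0, scale=0.5, size=n)            # unobserved frailty, E[u] = 1

# Integrated hazard: Lambda(t | x, u) = u * exp(x * beta) * t**k.
# Setting Lambda(T) = E with E ~ Exp(1) and inverting gives a duration draw.
e = rng.exponential(size=n)
T = (e / (u * np.exp(x * beta))) ** (1.0 / k)

# Dynamic selection: high-frailty individuals fail first, so the population
# hazard is dragged below the baseline and can even turn downward over time,
# even though each individual's hazard is increasing here (k > 1).
```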
Testing exogeneity in nonparametric instrumental variables identified by conditional quantile restrictions. cemmap working paper No. CWP68/15.
In recent years, major advances have taken place in three areas of random utility modeling: (1) semiparametric estimation, (2) computational methods for multinomial probit models, and (3) computational methods for Bayesian estimation. This paper summarizes these developments and discusses their implications for practice.
Many dependent variables of interest in economics and other social sciences can only take two values. The two possible outcomes are usually denoted by 0 and 1, and such variables are called dummy variables or dichotomous variables. An example is the labor market status of a person: the variable takes the value 1 if the person is employed and 0 if he or she is unemployed. The values 1 and 0 can be assigned arbitrarily.
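The standard tools for such outcomes are binary response models such as logit and probit, which model P(Y = 1 | X) rather than Y itself. A minimal logit example on simulated employment data (statsmodels assumed available; all numbers are made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
educ = rng.normal(13.0, 2.5, n)                        # years of education
latent = -4.0 + 0.35 * educ + rng.logistic(size=n)     # logistic latent index
employed = (latent > 0).astype(int)                    # binary outcome: 1 or 0

X = sm.add_constant(educ)
logit = sm.Logit(employed, X).fit(disp=0)
print(logit.params)                                    # close to (-4.0, 0.35)
```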
A parameter of an econometric model is identified if there is a one-to-one or many-to-one mapping from the population distribution of the available data to the parameter. Often, this mapping is obtained by inverting a mapping from the parameter to the population distribution. If the inverse mapping is discontinuous, then estimation of the parameter usually presents an ill-posed inverse problem. Such problems arise in many settings in economics and other fields in which the parameter of interest is a function. This article explains how ill-posedness arises and why it causes problems for estimation. The need to modify or regularize the identifying mapping is explained, and methods for regularization and estimation are discussed. Methods for forming confidence intervals and testing hypotheses are summarized. It is shown that a hypothesis test can be more precise in a certain sense than an estimator. An empirical example illustrates estimation in an ill-posed setting in economics.
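The mechanics — a badly conditioned inverse mapping magnifying tiny perturbations of the data, and regularisation restoring stability at the price of some bias — fit in a few lines. Tikhonov (ridge) regularisation stands in below for the general idea, complementing the spectral cut-off sketch above; the operator and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 80
x = np.arange(1, n + 1) / n
K = np.minimum(x[:, None], x[None, :]) / n             # integration-type operator
f = np.cos(np.pi * x)
y_noisy = K @ f + 1e-4 * rng.standard_normal(n)        # tiny data perturbation

# Ill-posedness: the exact inverse magnifies the perturbation enormously.
print(np.linalg.cond(K))                               # very large condition number
print(np.linalg.norm(np.linalg.solve(K, y_noisy) - f)) # error comparable to ||f||

# Tikhonov (ridge) regularisation trades a little bias for stability.
lam = 1e-6
f_ridge = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y_noisy)
print(np.linalg.norm(f_ridge - f))                     # far smaller error
```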
We consider estimation of a linear or nonparametric additive model in which a few coefficients or additive components are "large" and may be objects of substantive interest, whereas others are "small" but not necessarily zero. The number of small coefficients or additive components may exceed the sample size. It is not known which coefficients or components are large and which are small. The large coefficients or additive components can be estimated with a smaller mean-square error or integrated mean-square error if the small ones can be identified and the covariates associated with them dropped from the model. We give conditions under which several penalized least squares procedures distinguish correctly between large and small coefficients or additive components with probability approaching 1 as the sample size increases. The results of Monte Carlo experiments and an empirical example illustrate the benefits of our methods.
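The phenomenon can be illustrated with the lasso, one penalized least-squares procedure of the kind studied: with a few large coefficients and many small-but-nonzero ones, the penalty screens out the small ones, after which least squares on the surviving covariates estimates the large coefficients well. scikit-learn is assumed available, and all numbers are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(7)
n, p = 200, 500                                        # more coefficients than observations
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                            # a few "large" coefficients
beta[3:] = rng.normal(0.0, 0.02, p - 3)                # many "small" but nonzero ones

X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

sel = np.flatnonzero(Lasso(alpha=0.15).fit(X, y).coef_)  # screening step
print("selected:", sel[:10])                           # typically picks up 0, 1, 2

# Post-selection OLS on the surviving covariates estimates the large
# coefficients with far lower MSE than fitting all 500 would allow.
ols = LinearRegression().fit(X[:, sel], y)
print(ols.coef_[:3])
```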