This paper is devoted to the multivariate estimation of a vector of Poisson means. A novel loss function that penalises bad estimates of each of the parameters and the sum (or equivalently the mean) of the parameters is introduced. Under this loss function, a class of minimax estimators that uniformly dominate the maximum likelihood estimator is derived. Crucially, these methods have the property that for estimating a given component parameter, the full data vector is utilised. Estimators in this class can be fine-tuned to limit shrinkage away from the maximum likelihood estimator, thereby avoiding implausible estimates of the sum of the parameters. Further light is shed on this new class of estimators by showing that it can be derived by Bayesian and empirical Bayesian methods. In particular, we exhibit a generalisation of the Clevenson-Zidek estimator, and prove its admissibility. Moreover, a class of prior distributions for which the Bayes estimators uniformly dominate the maximum likelihood estimator under the new loss function is derived. A section is included involving weighted loss functions, notably also leading to a procedure improving uniformly on the maximum likelihood method in an infinite-dimensional setup. Importantly, some of our methods lead to constructions of new multivariate models for both rate parameters and count observations. Finally, estimators that shrink the usual estimators towards a data-based point in the parameter space are derived and compared.
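As a rough illustration of the kind of simultaneous shrinkage the abstract describes, the classical Clevenson-Zidek rule (the starting point that the paper generalises, not the paper's own estimator) can be sketched as follows. The sketch assumes the textbook form delta_i(x) = (1 - (p - 1)/(sum(x) + p - 1)) * x_i, in which the estimate of each Poisson mean uses the full data vector through the total count:

```python
import numpy as np

def clevenson_zidek(x):
    """Classical Clevenson-Zidek shrinkage estimate of a vector of
    Poisson means, given one observed count per component.

    A minimal sketch, assuming the standard form
        delta_i(x) = (1 - (p - 1) / (sum(x) + p - 1)) * x_i,
    which shrinks the maximum likelihood estimate x towards zero by a
    factor that depends on the total count, so every component's
    estimate uses the full data vector.
    """
    x = np.asarray(x, dtype=float)
    p = x.size                      # number of Poisson parameters
    z = x.sum()                     # total count drives the shrinkage
    shrink = 1.0 - (p - 1) / (z + p - 1)
    return shrink * x

# Example: five observed counts; the estimates are the MLE scaled
# down by a common data-dependent factor.
x = np.array([3, 0, 5, 2, 1])
print(clevenson_zidek(x))
```

Note how the shrinkage factor tends to 1 as the total count grows, so the rule departs little from the maximum likelihood estimator when the data are plentiful; the fine-tuning described in the abstract controls exactly how far such a factor may pull the estimated sum away from the observed total.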
Previous chapters have developed concepts and methodology pertaining to confidence distributions and related inference procedures. Some of these methods take the form of generally applicable recipes, via log-likelihood profiles, deviances and first-order large-sample approximations to the distribution of estimators of the focus estimands in question. Sometimes these recipes are too coarse and are in need of modification and refinement, however, which is the topic of the present chapter. We discuss methods based on mean and bias corrected deviance curves, t-bootstrapping, a certain acceleration and bias correction method, approximations via expansions, prepivoting and modified likelihood profiles. The extent to which these methods lead to improvements is also briefly illustrated and discussed.

Introduction

In the presence of nuisance parameters, uniformly most powerful exact inference is available only in regular exponential models for continuous data and in other models with Neyman structure, as discussed and exemplified in Chapter 5. Exact confidence distributions exist in a wider class of models, but need not be canonical. The estimate of location based on the Wilcoxon statistic, for example, has an exactly known distribution in the location model where only symmetry is assumed; see Section 11.4. In more complex models, the statistic on which to base the confidence distribution might be chosen on various grounds: the structure of the likelihood function, perceived robustness, asymptotic properties, computational feasibility, or the perspective and tradition of the study. In the given model, with finite data, it might be difficult to obtain an exact confidence distribution based on the chosen statistic.
As we shall see, there are various techniques available for obtaining approximate confidence distributions and confidence likelihoods, improving on the first-order ones worked with in Chapters 3–4. Bootstrapping, simulation and asymptotics are useful tools in calculating approximate confidence distributions and in characterising their power properties. When an estimator, often the maximum likelihood estimator of the interest parameter, is used as the statistic on which the confidence distribution is based, bootstrapping provides an estimate of the sampling distribution of the statistic. This empirical sampling distribution can be turned into an approximate confidence distribution in several ways, which we address in the text that follows.
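The last step above — turning a bootstrap distribution of the estimator into an approximate confidence distribution — can be sketched with the simplest percentile-type recipe (a first-order construction, not the refined corrections this chapter develops; the function names and the example data are illustrative assumptions):

```python
import numpy as np

def bootstrap_cd(data, estimator, n_boot=2000, seed=0):
    """Turn the bootstrap distribution of an estimator into an
    approximate confidence distribution, via the simplest
    percentile-type recipe:

        C(psi) ~= fraction of bootstrap estimates <= psi.

    A sketch under that assumption only; refined methods (t-bootstrap,
    acceleration and bias correction, prepivoting) modify this.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    # Resample the data with replacement and re-estimate each time.
    boots = np.array([
        estimator(rng.choice(data, size=n, replace=True))
        for _ in range(n_boot)
    ])
    boots.sort()

    def cd(psi):
        """Empirical cdf of the bootstrap estimates, evaluated at psi."""
        return np.searchsorted(boots, psi, side="right") / n_boot

    return cd

# Illustrative data: the confidence distribution for the mean should
# sit near one half at the observed estimate.
data = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.5, 4.9, 5.1])
cd = bootstrap_cd(data, np.mean)
print(cd(np.mean(data)))  # close to 0.5
```

The design choice here is deliberate crudeness: this raw percentile construction is exactly the kind of first-order approximation that the chapter's corrections (bias-corrected deviance curves, t-bootstrapping, acceleration and bias correction) are meant to improve upon.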
Papers by Nils Hjort