Article

On Default Priors for Robust Bayesian Estimation with Divergences

by Tomoyuki Nakagawa 1,* and Shintaro Hashimoto 2
1 Department of Information Sciences, Tokyo University of Science, Chiba 278-8510, Japan
2 Department of Mathematics, Hiroshima University, Hiroshima 739-8521, Japan
* Author to whom correspondence should be addressed.
Entropy 2021, 23(1), 29; https://doi.org/10.3390/e23010029
Submission received: 26 November 2020 / Revised: 18 December 2020 / Accepted: 23 December 2020 / Published: 27 December 2020
(This article belongs to the Special Issue Bayesian Inference and Computation)

Abstract

This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum γ-divergence estimator is well known to work well under heavy contamination. Robust Bayesian methods using quasi-posterior distributions based on divergences have also been proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior distribution based on the γ-divergence. In particular, we show that the proposed priors are approximately robust under a condition on the contamination distribution, without assuming any condition on the contamination ratio. Some simulation studies are also presented.

1. Introduction

The problem of robust parameter estimation against outliers has a long history. For example, Huber and Ronchetti [1] provided an excellent review of classical robust estimation theory. It is well known that the maximum likelihood estimator (MLE) is not robust against outliers because it is obtained by minimizing the Kullback–Leibler (KL) divergence between the true and empirical distributions. To overcome this problem, we may use other (robust) divergences instead of the KL divergence. Robust parameter estimation based on divergences has been one of the central topics in modern robust statistics (e.g., [2]). Such a method was first proposed by [3], who referred to it as the minimum density power divergence estimator. Reference [4] also proposed the "type 0 divergence", which is a modified version of the density power divergence, and Reference [5] showed that it has good robustness properties. The type 0 divergence is also known as the γ-divergence, and statistical methods based on the γ-divergence have been presented by many authors (e.g., [6,7,8]).
In Bayesian statistics, robustness against outliers is also an important issue, and divergence-based Bayesian methods have been proposed in recent years. Such methods are known as quasi-Bayes (or general Bayes) methods in some studies, and the corresponding posterior distributions are called quasi-posterior (or general posterior) distributions. To overcome the model misspecification problem (see [9]), quasi-posterior distributions are based on a general loss function rather than the usual log-likelihood function. In general, such loss functions need not depend on an assumed statistical model. However, in this study, we use loss functions that depend on the assumed model because we are interested in robust estimation against outliers; that is, the assumed model itself is appropriate, but the data generating distribution is contaminated. In other words, we use divergences or scoring rules as the loss function for the quasi-posterior distribution (see also [10,11,12,13,14]). For example, Reference [10] used the Hellinger divergence, Reference [11] used the density power divergence, and References [12,14] used the γ-divergence. In particular, the quasi-posterior distribution based on the γ-divergence was referred to as the γ-posterior in [12], and the authors showed that the γ-posterior has good robustness properties that overcome the problems in [11].
Although the selection of priors is an important issue in Bayesian statistics, in practice we often have no prior information. In such cases, we may use priors called default or objective priors, and we should select an appropriate objective prior in a given context. In particular, we consider reference and moment matching priors in this paper. The reference prior was first proposed by [15], and the moment matching prior was proposed by [16]. However, such objective priors generally depend on the unknown data generating distribution when we cannot assume that the contamination ratio is approximately zero. For example, if we assume the ε-contamination model (see, e.g., [1]) as the data generating distribution, many objective priors depend on the unknown contamination ratio and the unknown contamination distribution because they involve expectations under the data generating distribution. Although [17] derived reference priors under quasi-posterior distributions based on several scoring rules, they discussed the robustness of such reference priors only when the contamination ratio ε is approximately zero. Furthermore, their simulation studies largely depended on this assumption; in other words, they indirectly assumed that ε is approximately zero. The current study derives moment matching priors under the quasi-posterior distribution in a similar way to [16], and we show that the reference and moment matching priors based on the γ-divergence do not, approximately, depend on such unknown quantities under a certain assumption on the contamination distribution, even if the contamination ratio is not small.
The rest of this paper is organized as follows. In Section 2, we review robust Bayesian estimation based on divergences referring to some previous studies. We derive moment matching priors based on the quasi-posterior distribution using an asymptotic expansion of the quasi-posterior distribution given by [17] in Section 3. Furthermore, we show that the reference and moment matching priors based on the γ -posterior do not depend on the contamination ratio and the contamination distribution. In Section 4, we compare the empirical bias and mean squared error of posterior means through some simulation studies. Some discussion about the selection of tuning parameters is also provided.

2. Robust Bayesian Estimation Using Divergences

In this section, we review the framework of robust estimation in the seminal paper by Fujisawa and Eguchi [5], and we introduce robust Bayesian estimation using divergences. Let $X_1,\ldots,X_n$ be independent and identically distributed (iid) random variables according to a distribution G with probability density function g on $\Omega$, and let $X^n = (X_1,\ldots,X_n)$. We assume the parametric model $f_\theta = f(x,\theta)$ $(\theta\in\Theta\subset\mathbb{R}^p)$ and consider the estimation problem for $\theta$.
Then, the γ -divergence between two probability densities g and f is defined by:
$$D_\gamma(g, f_\theta) = \frac{1}{\gamma(\gamma+1)}\log\int_\Omega g(x)^{1+\gamma}dx - \frac{1}{\gamma}\log\int_\Omega g(x)f_\theta(x)^\gamma dx + \frac{1}{\gamma+1}\log\int_\Omega f_\theta(x)^{1+\gamma}dx,$$
where γ > 0 is a tuning parameter on robustness. We also define the γ -cross-entropy as:
$$d_\gamma(g, f_\theta) = -\frac{1}{\gamma}\log\int_\Omega g(x)f_\theta(x)^\gamma dx + \frac{1}{\gamma+1}\log\int_\Omega f_\theta(x)^{1+\gamma}dx$$
(see [4,5]).
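For readers who want to experiment numerically, the following Python sketch (ours, not from the paper) approximates the γ-divergence and γ-cross-entropy on a grid; the contaminated density g used below, a mixture of N(0,1) and N(6,1), is an illustrative choice only.

```python
# Illustrative sketch (not from the paper): grid approximation of the gamma-divergence
# D_gamma(g, f_theta) and the gamma-cross-entropy d_gamma(g, f_theta) defined above.
import numpy as np
from scipy.stats import norm

def d_gamma(g, f, gamma, x):
    """gamma-cross-entropy d_gamma(g, f) by the trapezoidal rule on the grid x."""
    return (-np.log(np.trapz(g * f**gamma, x)) / gamma
            + np.log(np.trapz(f**(1.0 + gamma), x)) / (gamma + 1.0))

def D_gamma(g, f, gamma, x):
    """gamma-divergence D_gamma(g, f) = d_gamma(g, f) - d_gamma(g, g)."""
    return d_gamma(g, f, gamma, x) - d_gamma(g, g, gamma, x)

x = np.linspace(-20, 20, 8001)
f = norm.pdf(x, 0, 1)                                   # assumed model f_theta = N(0, 1)
g = 0.8 * norm.pdf(x, 0, 1) + 0.2 * norm.pdf(x, 6, 1)   # hypothetical contaminated density
print(D_gamma(g, f, 0.5, x), d_gamma(g, f, 0.5, x))
```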

2.1. Framework of Robustness

Fujisawa and Eguchi [5] introduced a new framework of robustness, which is different from the classical one. When some of the data values are regarded as outliers, we need a robust estimation procedure. Typically, an observation that takes a large value is regarded as an outlier. Under this convention, many robust parameter estimation procedures have been proposed to reduce the bias caused by outliers. The influence function is one way to measure the sensitivity of an estimator to outliers. It is known that the bias of an estimator is approximately proportional to the influence function when the contamination ratio ε is small. However, when ε is not small, the bias is no longer approximately proportional to the influence function. Reference [5] showed that estimation based on the γ-divergence gives a sufficiently small bias under heavy contamination. Suppose that observations are generated from a mixture distribution $g(x) = (1-\varepsilon)f(x) + \varepsilon\delta(x)$, where $f(x)$ is the underlying density, $\delta(x)$ is another density function, and ε is the contamination ratio. In Section 3, we assume that the condition:
$$\nu_f = \left\{\int_\Omega\delta(x)f(x)^{\gamma_0}dx\right\}^{1/\gamma_0}\approx 0 \qquad (1)$$
holds for a constant $\gamma_0 > 0$ (see [5]). When $x_0$ is generated from $\delta(x)$, we call $x_0$ an outlier. We note that we do not assume that the contamination ratio ε is sufficiently small. This condition means that the contamination distribution $\delta(x)$ lies mostly in the tail of the underlying density $f(x)$; in other words, for an outlier $x_0$, it holds that $f(x_0)\approx 0$. We note that the condition (1) is also a basis for proving the robustness against outliers of the minimum γ-divergence estimator in [5]. Furthermore, Reference [18] provided some theoretical results for the γ-divergence, and related works in the frequentist setting have also been developed (e.g., [6,7,8], and so on).
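To see what the condition asks for numerically (our illustration; the densities below are hypothetical), $\nu_f$ can be computed directly: it is tiny when δ sits in the tail of f, and far from zero when δ overlaps the bulk of f.

```python
# Illustrative check of condition (1): nu_f = { \int delta(x) f(x)^{gamma_0} dx }^{1/gamma_0}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def nu_f(delta_pdf, f_pdf, gamma0):
    val, _ = quad(lambda x: delta_pdf(x) * f_pdf(x)**gamma0, -np.inf, np.inf)
    return val**(1.0 / gamma0)

f = norm(0, 1).pdf
print(nu_f(norm(6, 1).pdf, f, 0.5))   # contamination in the tail of f: very small
print(nu_f(norm(1, 1).pdf, f, 0.5))   # contamination near the mode of f: not small
```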
In the rest of this section, we give a brief review of the general Bayesian updating and introduce some previous works that are closely related to this paper.

2.2. General Bayesian Updating

We consider the same framework as [9,13]. We are interested in $\theta = \theta(G)$ $(\theta\in\Theta\subset\mathbb{R}^p)$, and we define a loss function $\ell_\theta(x) := \ell(\theta, x)$. Further, let $\theta^* = \arg\min_{\theta\in\Theta}E_G[\ell_\theta(X)]$ be the target parameter. We define the risk function by $E_G[\ell_\theta(X)]$, and its empirical version by $R_n(\theta) = (1/n)\sum_{i=1}^n\ell_\theta(X_i)$. For the prior distribution $\pi(\theta)$, the quasi-posterior density is defined by:
$$\pi_{n,\omega}(\theta) \propto \exp\left\{-\omega n R_n(\theta)\right\}\pi(\theta),$$
where $\omega > 0$ is a tuning parameter called the learning rate. We note that the quasi-posterior is also called the general posterior or the Gibbs posterior. In this paper, we fix $\omega = 1$ for the same reason as [13]. For example, if we set $\ell_\mu(x) = |x-\mu|$, we can estimate the median of the distribution without assuming a statistical model. However, in this study we consider model-dependent loss functions based on a statistical divergence (or scoring rule) (see also [11,12,13,14]). The unified framework of inference using the quasi-posterior distribution was discussed by [9].
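As a toy illustration (ours, not from the paper) of the quasi-posterior above with ω = 1, the absolute loss $\ell_\mu(x) = |x-\mu|$ mentioned in the text yields a posterior whose mode tracks the sample median; the data and grid below are arbitrary.

```python
# Illustrative sketch: unnormalized quasi-posterior pi_{n,omega}(mu) ∝ exp{-omega * n * R_n(mu)} pi(mu)
# for the absolute loss ell_mu(x) = |x - mu| (targets the median), with omega = 1 and a flat prior.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)                                   # hypothetical data
mu_grid = np.linspace(-2, 2, 401)
omega = 1.0

R_n = np.array([np.mean(np.abs(x - m)) for m in mu_grid])     # empirical risk R_n(mu)
log_post = -omega * len(x) * R_n                              # flat prior pi(mu) ∝ 1
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, mu_grid)                               # normalize on the grid

print(mu_grid[np.argmax(post)], np.median(x))                 # the mode is close to the sample median
```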

2.3. Assumptions and Previous Works

Let $d(\cdot,\cdot)$ be a cross-entropy induced by a divergence, and let $\{f_\theta : \theta\in\Theta\}$ be a statistical model. In general, the quasi-posterior distribution based on the cross-entropy is defined by:
$$\pi^{(d)}(\theta\mid X^n) \propto \exp\left\{-n\,d(\bar g, f_\theta)\right\}\pi(\theta) = \exp\left\{\sum_{i=1}^n q^{(d)}(X_i;\theta)\right\}\pi(\theta), \qquad (2)$$
where d ( g ¯ , f θ ) is the empirically estimated cross-entropy and g ¯ is the empirical density function. In robust statistics based on divergences, we may use the cross-entropy induced by a robust divergence (e.g., [3,4,5]). In this paper, we mainly use the γ -cross-entropy proposed by [4,5]. Recently, Reference [12] proposed the γ -posterior based on the monotone transformation of the γ -cross-entropy:
$$\tilde d_\gamma(g, f_\theta) = -\frac{1}{\gamma}\left\{\exp\left(-\gamma\,d_\gamma(g,f_\theta)\right)-1\right\} = -\frac{1}{\gamma}\frac{\int_\Omega g(x)f_\theta(x)^\gamma dx}{\left\{\int_\Omega f_\theta(x)^{1+\gamma}dx\right\}^{\gamma/(1+\gamma)}} + \frac{1}{\gamma}$$
for $\gamma > 0$. The γ-posterior is defined by taking $d(\bar g, f_\theta) = \tilde d_\gamma(\bar g, f_\theta)$ in (2). On the other hand, Reference [11] proposed the $R^{(\alpha)}$-posterior based on the density power cross-entropy:
$$d_\alpha(g, f_\theta) = -\frac{1}{\alpha}\int_\Omega g(x)f_\theta(x)^\alpha dx + \frac{1}{1+\alpha}\int_\Omega f_\theta(x)^{1+\alpha}dx$$
for $\alpha > 0$. The $R^{(\alpha)}$-posterior is defined by taking $d(\bar g, f_\theta) = d_\alpha(\bar g, f_\theta)$ in (2). Note that the cross-entropies $d_\alpha(\cdot,\cdot)$ and $\tilde d_\gamma(\cdot,\cdot)$ converge to the negative log-likelihood function as $\alpha\to 0$ and $\gamma\to 0$, respectively. Hence, they can be regarded as generalizations of the negative log-likelihood function. It is known that the posterior mean based on the $R^{(\alpha)}$-posterior works well for the estimation of a location parameter in the presence of outliers. However, it is known to be unstable for the estimation of a scale parameter (see [12]). Nakagawa and Hashimoto [12] showed in some simulation studies that the posterior mean under the γ-posterior has a small bias under heavy contamination for both location and scale parameters.
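To see concretely why these quasi-posteriors are robust, the per-observation terms $q^{(\alpha)}$ and $q^{(\gamma)}$ appearing in (2) can be written in closed form for the normal model $N(\mu,\sigma^2)$, using $\int f_\theta^{1+c}dy = (2\pi\sigma^2)^{-c/2}(1+c)^{-1/2}$. The sketch below is ours (the explicit forms of $q^{(\alpha)}$ and $q^{(\gamma)}$ are those given in Section 3 and Appendix A); an outlying observation contributes a bounded amount to either quasi-log-likelihood, unlike the ordinary log-likelihood.

```python
# Illustrative sketch: per-observation terms of the R(alpha)- and gamma-posteriors
# for the normal model N(mu, sigma^2), with
#   q_alpha(x) = f(x)^alpha / alpha - I(alpha)/(1 + alpha),
#   q_gamma(x) = f(x)^gamma / (gamma * I(gamma)^{gamma/(1+gamma)}),
# where I(c) = \int f(y)^{1+c} dy = (2 pi sigma^2)^{-c/2} (1+c)^{-1/2} for the normal.
import numpy as np
from scipy.stats import norm

def I_c(sigma, c):
    return (2 * np.pi * sigma**2) ** (-c / 2) / np.sqrt(1 + c)

def q_alpha(x, mu, sigma, alpha):
    f = norm.pdf(x, mu, sigma)
    return f**alpha / alpha - I_c(sigma, alpha) / (1 + alpha)

def q_gamma(x, mu, sigma, gamma):
    f = norm.pdf(x, mu, sigma)
    return f**gamma / (gamma * I_c(sigma, gamma) ** (gamma / (1 + gamma)))

for x in [0.0, 2.0, 8.0]:                      # 8.0 plays the role of an outlier
    print(x, norm.logpdf(x, 0, 1), q_alpha(x, 0, 1, 0.5), q_gamma(x, 0, 1, 0.5))
# The log-likelihood of the outlier is hugely negative, while q_alpha and q_gamma
# stay bounded, so a single outlier barely moves the quasi-posterior.
```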
Let $\theta_g := \arg\min_{\theta\in\Theta}d(g, f_\theta)$ be the target parameter. We now assume the following regularity conditions on the density function $f_\theta(x) = f(x;\theta)$ $(\theta\in\Theta\subset\mathbb{R}^p)$. We use indices to denote derivatives of $\bar D(\theta) = d(\bar g, f_\theta)$ with respect to the components of the parameter $\theta$; for example, $\bar D_{ijk}(\theta) = \partial_i\partial_j\partial_k\bar D(\theta)$ and $\bar D_{ijk\ell}(\theta) = \partial_i\partial_j\partial_k\partial_\ell\bar D(\theta)$ for $i,j,k,\ell = 1,\ldots,p$.
(A1)
The support of the density function does not depend on the unknown parameter $\theta$, and $f_\theta$ is fifth-order differentiable with respect to $\theta$ in a neighborhood U of $\theta_g$.
(A2)
The interchange of the order of integration with respect to x and differentiation at $\theta_g$ is justified. The expectations
$$E_g\left[\partial_i\partial_j\partial_k q^{(d)}(X_1;\theta_g)\right] \quad\text{and}\quad E_g\left[\partial_i\partial_j\partial_k\partial_\ell q^{(d)}(X_1;\theta_g)\right]$$
are all finite, and there exists $M_{ijk\ell s}(x)$ such that
$$\sup_{\theta\in U}\left|\partial_i\partial_j\partial_k\partial_\ell\partial_s q^{(d)}(x;\theta)\right| \le M_{ijk\ell s}(x)$$
and $E_g[M_{ijk\ell s}(X_1)] < \infty$ for all $i,j,k,\ell,s = 1,\ldots,p$, where $\partial_i = \partial/\partial\theta_i$ and $\partial_\ell = \partial/\partial\theta_\ell$, while $E_g(\cdot)$ denotes the expectation of X with respect to the probability density function g.
(A3)
For any $\delta > 0$, with probability one,
$$\sup_{\|\theta-\theta_g\|>\delta}\left\{d(\bar g, f_{\theta_g}) - d(\bar g, f_\theta)\right\} < -\varepsilon$$
for some $\varepsilon > 0$ and for all sufficiently large n.
The matrices I ( d ) ( θ ) and J ( d ) ( θ ) are defined by:
$$I^{(d)}(\theta) = E_g\left[\partial q^{(d)}(X_1;\theta)\,\partial q^{(d)}(X_1;\theta)^\top\right],\qquad J^{(d)}(\theta) = -E_g\left[\partial\partial^\top q^{(d)}(X_1;\theta)\right],$$
respectively. We also assume that I ( d ) ( θ ) and J ( d ) ( θ ) are positive definite matrices. Under these conditions, References [11,12] discussed several asymptotic properties of the quasi-posterior distributions and the corresponding posterior means.
In terms of the higher order asymptotic theory, Giummolè et al. [17] derived the asymptotic expansion of such quasi-posterior distributions. We now introduce the notation that will be used in the rest of the paper. Reference [17] presented the following theorem.
Theorem 1 
(Giummolè et al. [17]). Under the conditions (A1)–(A3), we assume that $\hat\theta_n^{(d)}$ is a consistent solution of $\partial_\theta d(\bar g, f_\theta) = 0$ and $\hat\theta_n^{(d)}\xrightarrow{p}\theta_g$ as $n\to\infty$. Then, for any prior density function $\pi(\theta)$ that is third-order differentiable and positive at $\theta_g$, it holds that:
$$\pi^{*(d)}(t_n\mid X^n) = \phi\left(t_n; \tilde J^{-1}\right)\left\{1 + n^{-1/2}A_1(t_n) + n^{-1}A_2(t_n)\right\} + O_p(n^{-3/2}), \qquad (3)$$
where $\pi^{*(d)}(t_n\mid X^n)$ is the quasi-posterior density function of the normalized random variable $t_n = (t_1,\ldots,t_p)^\top = \sqrt n(\theta - \hat\theta_n^{(d)})$ given $X^n$, $\phi(\cdot\,; A)$ is the density function of the p-variate normal distribution with zero mean vector and covariance matrix A, $\tilde J = J^{(d)}(\hat\theta_n^{(d)})$, $\tilde J^{-1} = (\tilde J^{ij})$, and:
$$A_1(t_n) = \sum_{i=1}^p\frac{\partial_i\pi(\hat\theta_n^{(d)})}{\pi(\hat\theta_n^{(d)})}t_i - \frac16\sum_{i,j,k}\bar D_{ijk}(\hat\theta_n^{(d)})t_it_jt_k,$$
$$A_2(t_n) = \sum_{i,j}\frac12\frac{\partial_i\partial_j\pi(\hat\theta_n^{(d)})}{\pi(\hat\theta_n^{(d)})}\left(t_it_j - \tilde J^{ij}\right) - \sum_{i,j,k,\ell}\frac16\frac{\partial_i\pi(\hat\theta_n^{(d)})}{\pi(\hat\theta_n^{(d)})}\bar D_{jk\ell}(\hat\theta_n^{(d)})\left(t_it_jt_kt_\ell - 3\tilde J^{ij}\tilde J^{k\ell}\right)$$
$$\qquad - \sum_{i,j,k,\ell}\frac1{24}\bar D_{ijk\ell}(\hat\theta_n^{(d)})\left(t_it_jt_kt_\ell - 3\tilde J^{ij}\tilde J^{k\ell}\right) + \sum_{i,j,k,h,g,f}\frac1{72}\bar D_{ijk}\bar D_{hgf}\left(2t_it_jt_kt_ht_gt_f - 15\tilde J^{ij}\tilde J^{kh}\tilde J^{gf}\right).$$
Proof. 
The proof is given in the Appendix A of [17]. □
As previously mentioned, quasi-posterior distributions depend on the cross-entropy induced by a divergence and a prior distribution. If we have some information about unknown parameters θ , we can use a prior distribution that takes such prior information into account. However, in the absence of prior information, we often use prior distributions known as default or objective priors. Reference [17] proposed the reference prior for quasi-posterior distributions, which is a type of objective prior (see [15]). The reference prior π R is obtained by asymptotically maximizing the expected KL divergence between prior and posterior distributions. As a generalization of the reference prior, Reference [19] discussed such priors under a general divergence measure known as the α -divergence (see also [20,21]). The reference prior under the α -divergence is given by asymptotically maximizing the expected α -divergence:
$$H(\pi) = E\left[D^{(\alpha)}\left(\pi^{(d)}(\theta\mid X^n), \pi(\theta)\right)\right],$$
where D ( α ) is the α -divergence defined as:
$$D^{(\alpha)}\left(\pi^{(d)}(\theta\mid X^n),\pi(\theta)\right) = \frac{1}{\alpha(1-\alpha)}\int_\Theta\left[1 - \left\{\frac{\pi(\theta)}{\pi^{(d)}(\theta\mid X^n)}\right\}^\alpha\right]\pi^{(d)}(\theta\mid X^n)\,d\theta,$$
which corresponds to the KL divergence as $\alpha\to 0$, the Hellinger divergence for $\alpha = 1/2$, and the $\chi^2$-divergence for $\alpha = -1$. Reference [17] derived reference priors with the α-divergence under quasi-posteriors based on several proper scoring rules, such as the Tsallis scoring rule and the Hyvärinen scoring rule. We note that the former rule is the same as the density power score of [3] with minor notational modifications.
Theorem 2 
(Giummolè et al. [17]). When | α | < 1 , the reference prior that asymptotically maximizes the expected α-divergence between the quasi-posterior and prior distributions is given by:
$$\pi^R(\theta) \propto \det\left(J^{(d)}(\theta)\right)^{1/2}.$$
The result of Theorem 2 is similar to that of [19,20]. Objective priors such as the one in the above theorem are useful because they are determined by the assumed model. However, such priors do not have a statistical guarantee when the model is misspecified, as in Huber's ε-contamination model. In other words, the reference prior in Theorem 2 depends on the data generating distribution g through $J^{(d)}(\theta) = -E_g[\partial\partial^\top q^{(d)}(X_1;\theta)]$, where $g(x) = (1-\varepsilon)f_\theta(x)+\varepsilon\delta(x)$, when the contamination ratio ε is not small, such as in heavy contamination cases. In the next section, we consider objective priors under the γ-posterior that are robust against such unknown quantities.

3. Main Results

In this section, we show our main results. Our contributions are as follows. We derive moment matching priors for quasi-posterior distributions (Theorem 3). We prove that the proposed priors are robust under the condition on the tail of the contamination distribution (Theorem 4).

3.1. Moment Matching Priors

The moment matching priors proposed by [16] are priors that match the posterior mean and the MLE up to higher order (see also [22]). In this section, we extend the results of [16] to the context of quasi-posterior distributions. Our goal is to identify a prior such that the difference between the quasi-posterior mean $\tilde\theta_n^{(d)}$ and the frequentist minimum divergence estimator $\hat\theta_n^{(d)}$ converges to zero up to the order of $o(n^{-1})$. From Theorem 1, we have the following theorem.
Theorem 3 
Let $\tilde\theta_n^{(d)} = (\tilde\theta_1,\ldots,\tilde\theta_p)^\top$, $\hat\theta_n^{(d)} = (\hat\theta_1,\ldots,\hat\theta_p)^\top$, and $t_n = (t_1,\ldots,t_p)^\top = \sqrt n(\theta-\hat\theta_n^{(d)})$. Under the same assumptions as Theorem 1, it holds that:
$$n\left(\tilde\theta_\ell^{(d)} - \hat\theta_\ell^{(d)}\right)\xrightarrow{p}\sum_{i=1}^p\frac{\partial_i\pi(\theta_g)}{\pi(\theta_g)}J^{\ell i} + \frac16\sum_{i,j,k}g_{ijk}^{(d)}(\theta_g)\left(J^{ij}J^{k\ell} + J^{ik}J^{j\ell} + J^{i\ell}J^{jk}\right)$$
as $n\to\infty$ for $\ell = 1,\ldots,p$, where $J = J^{(d)}(\theta_g)$, $J^{-1} = (J^{ij})$, and $g_{ijk}^{(d)}(\theta) = E_g[\partial_i\partial_j\partial_k q^{(d)}(X_1;\theta)]$. Furthermore, if we set a prior that satisfies:
$$\frac{\partial_\ell\pi(\theta)}{\pi(\theta)} + \frac12\sum_{i,j}g_{ij\ell}^{(d)}(\theta)J^{ij}(\theta) = 0 \qquad (4)$$
for all $\ell = 1,\ldots,p$, then it holds that:
$$n\left(\tilde\theta_\ell^{(d)} - \hat\theta_\ell^{(d)}\right)\xrightarrow{p} 0$$
for $\ell = 1,\ldots,p$ as $n\to\infty$, where $\{J^{(d)}(\theta)\}^{-1} = (J^{ij}(\theta))$.
Hereafter, the prior that satisfies Equation (4) up to the order of $o_p(n^{-1})$ for all $\ell = 1,\ldots,p$ is referred to as a moment matching prior, and we denote it by $\pi^M$.
Proof. 
From the asymptotic expansion of the posterior density (3), we have the following expansion of the posterior mean of $\theta_\ell$:
$$\tilde\theta_\ell^{(d)} = \int_\Theta\theta_\ell\,\pi^{(d)}(\theta\mid X^n)d\theta = \hat\theta_\ell^{(d)} + \frac{1}{\sqrt n}\int_{\mathbb R^p}t_\ell\,\pi^{*(d)}(t_n\mid X^n)dt_n = \hat\theta_\ell^{(d)} + \frac1n\int_{\mathbb R^p}t_\ell\,\phi\left(t_n;\tilde J^{-1}\right)A_1(t_n)dt_n + O_p(n^{-3/2}) \qquad (5)$$
for $\ell = 1,\ldots,p$. The integral in the above equation is calculated as:
$$\int_{\mathbb R^p}t_\ell A_1(t_n)\phi\left(t_n;\tilde J^{-1}\right)dt_n = \sum_{i=1}^p\frac{\partial_i\pi(\hat\theta_n^{(d)})}{\pi(\hat\theta_n^{(d)})}\int_{\mathbb R^p}t_it_\ell\,\phi\left(t_n;\tilde J^{-1}\right)dt_n - \frac16\sum_{i,j,k}\bar D_{ijk}(\hat\theta_n^{(d)})\int_{\mathbb R^p}t_it_jt_kt_\ell\,\phi\left(t_n;\tilde J^{-1}\right)dt_n$$
$$\qquad = \sum_{i=1}^p\frac{\partial_i\pi(\hat\theta_n^{(d)})}{\pi(\hat\theta_n^{(d)})}\tilde J^{i\ell} - \frac16\sum_{i,j,k}\bar D_{ijk}(\hat\theta_n^{(d)})\left(\tilde J^{ij}\tilde J^{k\ell} + \tilde J^{ik}\tilde J^{j\ell} + \tilde J^{i\ell}\tilde J^{jk}\right) + o_p(1). \qquad (6)$$
From (5) and (6), we have:
$$\tilde\theta_\ell^{(d)} - \hat\theta_\ell^{(d)} = \frac1n\sum_{i=1}^p\frac{\partial_i\pi(\hat\theta_n^{(d)})}{\pi(\hat\theta_n^{(d)})}\tilde J^{i\ell} - \frac{1}{6n}\sum_{i,j,k}\bar D_{ijk}(\hat\theta_n^{(d)})\left(\tilde J^{ij}\tilde J^{k\ell} + \tilde J^{ik}\tilde J^{j\ell} + \tilde J^{i\ell}\tilde J^{jk}\right) + O_p(n^{-3/2})$$
for $\ell = 1,\ldots,p$. Using the consistency of the estimator $\hat\theta_n^{(d)}$ together with $\bar D_{ijk}(\hat\theta_n^{(d)})\xrightarrow{p}-g_{ijk}^{(d)}(\theta_g)$, we obtain the asymptotic difference between $\tilde\theta_\ell^{(d)}$ and $\hat\theta_\ell^{(d)}$:
$$n\left(\tilde\theta_\ell^{(d)} - \hat\theta_\ell^{(d)}\right)\xrightarrow{p}\sum_{i=1}^p\frac{\partial_i\pi(\theta_g)}{\pi(\theta_g)}J^{\ell i} + \frac16\sum_{i,j,k}g_{ijk}^{(d)}(\theta_g)\left(J^{ij}J^{k\ell} + J^{ik}J^{j\ell} + J^{i\ell}J^{jk}\right)$$
as $n\to\infty$ for $\ell = 1,\ldots,p$. □
In general, it is not easy to obtain the moment matching priors explicitly. Two examples are given as follows.
Example 1.
When p = 1 , the moment matching prior is given by:
$$\pi^M(\theta) = C\exp\left\{-\int^\theta\frac{g_3^{(d)}(t)}{2J^{(d)}(t)}dt\right\}$$
for a constant C, where $g_3^{(d)}$ is the univariate version of $g_{ijk}^{(d)}$, that is, $g_3^{(d)}(\theta) = E_g[\partial^3 q^{(d)}(X_1;\theta)]$. This prior is very similar to that of [16], but the quantities $g_3^{(d)}(t)$ and $J^{(d)}(t)$ differ from those in [16].
Example 2.
When p = 2 , we put:
$$u_\ell(\theta_1,\theta_2) = \sum_{i,j}g_{ij\ell}^{(d)}(\theta)J^{ij}(\theta)\qquad(\ell = 1,2),$$
where $\theta = (\theta_1,\theta_2)^\top$. If $u_\ell(\theta_1,\theta_2)$ depends only on $\theta_\ell$ for each $\ell = 1,2$ and does not depend on the other parameter $\theta_k$ $(k\neq\ell)$, we write:
$$u_1(\theta_1,\theta_2)\equiv u_1(\theta_1),\qquad u_2(\theta_1,\theta_2)\equiv u_2(\theta_2).$$
Then, we can solve the differential equation given by (4), and the moment matching prior is obtained by
$$\pi^M(\theta_1,\theta_2)\propto\exp\left\{-\frac12\int^{\theta_1}u_1(t_1)dt_1\right\}\exp\left\{-\frac12\int^{\theta_2}u_2(t_2)dt_2\right\}.$$
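As a concrete instance of Example 2 (our illustration, anticipating the normal model treated in Section 3.2 and reading Equation (11) below backwards, up to the $O(\varepsilon\nu^\gamma)$ terms), take $\theta = (\mu,\sigma)$ under the γ-posterior. By symmetry in μ one has $u_1\equiv 0$, while $u_2$ depends only on σ through $u_2(\sigma) = c_\gamma/\sigma$ with $c_\gamma = (\gamma+7)/(1+\gamma)$, so that
$$\pi^M(\mu,\sigma)\propto\exp\left\{-\frac12\int^{\sigma}\frac{c_\gamma}{t}dt\right\} = \sigma^{-(\gamma+7)/\{2(1+\gamma)\}},$$
which is exactly the moment matching prior appearing in (11).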

3.2. Robustness of Objective Priors

For data that may be heavily contaminated, we cannot assume that the contamination ratio ε is approximately zero. In general, reference and moment matching priors depend on the contamination ratio and distribution. Therefore, we cannot directly use such objective priors for the quasi-posterior distributions because the contamination ratio ε and the contamination distribution $\delta(x)$ are unknown. In this subsection, we prove that priors based on the γ-divergence are robust against these unknown quantities. In addition to (1), we assume the following condition on the contamination distribution:
$$\nu_\theta = \left\{\int_\Omega\delta(x)f_\theta(x)^{\gamma_0}dx\right\}^{1/\gamma_0}\approx 0 \qquad (7)$$
for all $\theta\in\Theta$ and an appropriately large constant $\gamma_0 > 0$ (see also [5]). Note that the assumption (7) is also a basis for proving the robustness against outliers of the minimum γ-divergence estimator in [5]. Then, we have the following theorem.
Theorem 4 
Assume the condition (7). Let:
$$q^{(\gamma)}(x;\theta) := q^{(\tilde d_\gamma)}(x;\theta) = \frac{1}{\gamma}\frac{f_\theta(x)^\gamma}{\left\{\int_\Omega f_\theta(y)^{1+\gamma}dy\right\}^{\gamma/(1+\gamma)}},$$
and let:
$$h_{ij}^{(\gamma)}(\theta) = -E_{f_\theta}\left[\partial_i\partial_j q^{(\gamma)}(X_1;\theta)\right],\qquad \tilde g_{ijk}^{(\gamma)}(\theta) = E_{f_\theta}\left[\partial_i\partial_j\partial_k q^{(\gamma)}(X_1;\theta)\right].$$
Then, it holds that:
$$J_{ij}^{(\gamma)}(\theta) = -E_g\left[\partial_i\partial_j q^{(\gamma)}(X_1;\theta)\right] = (1-\varepsilon)h_{ij}^{(\gamma)}(\theta) + O(\varepsilon\nu^\gamma),\qquad g_{ijk}^{(\gamma)}(\theta) = E_g\left[\partial_i\partial_j\partial_k q^{(\gamma)}(X_1;\theta)\right] = (1-\varepsilon)\tilde g_{ijk}^{(\gamma)}(\theta) + O(\varepsilon\nu^\gamma) \qquad (8)$$
for $\gamma + 1\le\gamma_0$, where $\nu := \max\{\nu_f,\sup_{\theta\in\Theta}\nu_\theta\}$. The notation $O(\varepsilon\nu^\gamma)$ is used in the same way as in [5]. Furthermore, from the above results, the reference prior and Equation (4) are approximately given by:
$$\pi^R(\theta)\propto\det\left(H^{(\gamma)}(\theta)\right)^{1/2},\qquad \frac{\partial_\ell\pi(\theta)}{\pi(\theta)} + \frac12\sum_{i,j}\tilde g_{ij\ell}^{(\gamma)}(\theta)h^{ij}(\theta) = 0, \qquad (9)$$
where $H^{(\gamma)}(\theta) = (h_{ij}^{(\gamma)}(\theta))$ and $\{H^{(\gamma)}(\theta)\}^{-1} = (h^{ij}(\theta))$.
Proof. 
Put $\ell(x) = \log f_\theta(x)$, $\ell_i(x) = \partial_i\log f_\theta(x)$, $\ell_{ij}(x) = \partial_i\partial_j\log f_\theta(x)$, and $\ell_{ijk}(x) = \partial_i\partial_j\partial_k\log f_\theta(x)$. First, from Hölder's inequality and Lyapunov's inequality, it holds that:
$$\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_i(x)dx\right| \le \nu^\gamma\left\{\int_\Omega|\ell_i(x)|^{1+\gamma}\delta(x)dx\right\}^{1/(1+\gamma)},\qquad
\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_i(x)\ell_j(x)dx\right| \le \nu^\gamma\left\{\int_\Omega|\ell_i(x)\ell_j(x)|^{1+\gamma}\delta(x)dx\right\}^{1/(1+\gamma)},$$
$$\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_{ij}(x)dx\right| \le \nu^\gamma\left\{\int_\Omega|\ell_{ij}(x)|^{1+\gamma}\delta(x)dx\right\}^{1/(1+\gamma)},\qquad
\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_{ijk}(x)dx\right| \le \nu^\gamma\left\{\int_\Omega|\ell_{ijk}(x)|^{1+\gamma}\delta(x)dx\right\}^{1/(1+\gamma)},$$
$$\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_{ij}(x)\ell_k(x)dx\right| \le \nu^\gamma\left\{\int_\Omega|\ell_{ij}(x)\ell_k(x)|^{1+\gamma}\delta(x)dx\right\}^{1/(1+\gamma)},\qquad
\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_i(x)\ell_j(x)\ell_k(x)dx\right| \le \nu^\gamma\left\{\int_\Omega|\ell_i(x)\ell_j(x)\ell_k(x)|^{1+\gamma}\delta(x)dx\right\}^{1/(1+\gamma)} \qquad (10)$$
for $i,j,k = 1,\ldots,p$. Using (10) and the results in Appendix A, we have:
$$\left|\int_\Omega\delta(x)\,\partial_i\partial_j q^{(\gamma)}(x;\theta)dx\right| \le \|f_\theta\|_{\gamma+1}^{-\gamma}\,\gamma\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_i(x)\ell_j(x)dx\right| + \|f_\theta\|_{\gamma+1}^{-\gamma}\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_{ij}(x)dx\right|$$
$$\quad + \gamma|S_i|\,\|f_\theta\|_{\gamma+1}^{-(1+2\gamma)}\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_j(x)dx\right| + \gamma|S_j|\,\|f_\theta\|_{\gamma+1}^{-(1+2\gamma)}\left|\int_\Omega\delta(x)f_\theta(x)^\gamma\ell_i(x)dx\right|$$
$$\quad + (1+2\gamma)\|f_\theta\|_{\gamma+1}^{-(2+3\gamma)}|S_iS_j|\int_\Omega\delta(x)f_\theta(x)^\gamma dx + \|f_\theta\|_{\gamma+1}^{-(1+2\gamma)}\int_\Omega\delta(x)f_\theta(x)^\gamma dx\left|\int_\Omega f_\theta(y)^{\gamma+1}s_{ij}(y)dy\right| = O(\nu^\gamma),$$
where:
$$s_{ij}(y) = (\gamma+1)\ell_i(y)\ell_j(y) + \ell_{ij}(y),\qquad S_i = \int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy$$
for $i,j = 1,\ldots,p$. Similarly, it also holds that:
$$\left|\int_\Omega\delta(x)\,\partial_i\partial_j\partial_k q^{(\gamma)}(x;\theta)dx\right| = O(\nu^\gamma)$$
for $i,j,k = 1,\ldots,p$. Since
$$J_{ij}^{(\gamma)}(\theta) = -E_g\left[\partial_i\partial_j q^{(\gamma)}(X_1;\theta)\right] = (1-\varepsilon)h_{ij}^{(\gamma)}(\theta) - \varepsilon\int_\Omega\delta(x)\,\partial_i\partial_j q^{(\gamma)}(x;\theta)dx,$$
$$g_{ijk}^{(\gamma)}(\theta) = E_g\left[\partial_i\partial_j\partial_k q^{(\gamma)}(X_1;\theta)\right] = (1-\varepsilon)\tilde g_{ijk}^{(\gamma)}(\theta) + \varepsilon\int_\Omega\delta(x)\,\partial_i\partial_j\partial_k q^{(\gamma)}(x;\theta)dx,$$
the proof of (8) is complete. It is also easy to see the result of (9) from (8). □
It should be noted that (8) resembles the result of Theorem 5.1 in [5]. However, $q^{(\gamma)}(x;\theta)$ and its derivative functions differ from the corresponding formulae in [5], so the derivative functions and the proof of (8) are given in Appendix A. Theorem 4 shows that the expectations on the right-hand sides of $J_{ij}^{(\gamma)}(\theta)$ and $g_{ijk}^{(\gamma)}(\theta)$ depend only on the underlying model $f_\theta$ and not on the contamination distribution. Furthermore, the reference and moment matching priors for the γ-posterior are determined by the parametric model $f_\theta$ alone; that is, they do not depend on the contamination ratio or the contamination distribution. For example, for a normal distribution $N(\mu,\sigma^2)$, the reference and moment matching priors are given by:
$$\pi_R^{(\gamma)}(\mu,\sigma) = \sigma^{-3+1/(1+\gamma)} + O(\varepsilon\nu^\gamma),\qquad \pi_M^{(\gamma)}(\mu,\sigma) = \sigma^{-(\gamma+7)/\{2(1+\gamma)\}} + O(\varepsilon\nu^\gamma). \qquad (11)$$
However, the reference and moment matching priors under the $R^{(\alpha)}$-posterior depend on unknown quantities in the data generating distribution unless $\varepsilon\approx 0$, since $J_{ij}^{(\alpha)}(\theta)$ and $g_{ijk}^{(\alpha)}(\theta)$ have the following forms:
$$J_{ij}^{(\alpha)}(\theta) = -E_g\left[\partial_i\partial_j q^{(\alpha)}(X_1;\theta)\right] = -(1-\varepsilon)E_{f_\theta}\left[\partial_i\partial_j q^{(\alpha)}(X_1;\theta)\right] + \frac{\varepsilon}{1+\alpha}\int_\Omega\partial_i\partial_j f_\theta(x)^{1+\alpha}dx + O(\varepsilon\nu^\alpha),$$
$$g_{ijk}^{(\alpha)}(\theta) = E_g\left[\partial_i\partial_j\partial_k q^{(\alpha)}(X_1;\theta)\right] = (1-\varepsilon)E_{f_\theta}\left[\partial_i\partial_j\partial_k q^{(\alpha)}(X_1;\theta)\right] - \frac{\varepsilon}{1+\alpha}\int_\Omega\partial_i\partial_j\partial_k f_\theta(x)^{1+\alpha}dx + O(\varepsilon\nu^\alpha),$$
where:
$$q^{(\alpha)}(x;\theta) := q^{(d_\alpha)}(x;\theta) = \frac{1}{\alpha}f_\theta(x)^\alpha - \frac{1}{1+\alpha}\int_\Omega f_\theta(y)^{1+\alpha}dy.$$
The priors given by (11) can be practically used under the condition (7) even if the contamination ratio ε is not small.
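As a sanity check (ours, not part of the paper), the σ-exponent of the reference prior in (11) can be verified numerically: at ε = 0, $J^{(\gamma)}(\theta) = -E_{f_\theta}[\nabla_\theta\nabla_\theta^\top q^{(\gamma)}(X;\theta)]$, and $\det(J^{(\gamma)}(\theta))^{1/2}$ should scale as $\sigma^{-3+1/(1+\gamma)}$ in the normal model. The following Python sketch approximates the Hessian by central differences and the expectation by quadrature; the grid sizes and step h are arbitrary choices.

```python
# Illustrative numerical check of the sigma-exponent of the reference prior in (11).
import numpy as np
from scipy.stats import norm

def q_gamma(x, mu, sigma, gamma):
    # q^(gamma)(x; theta) for N(mu, sigma^2), using \int f^{1+gamma} = (2 pi sigma^2)^{-gamma/2} (1+gamma)^{-1/2}
    I = (2 * np.pi * sigma**2) ** (-gamma / 2) / np.sqrt(1 + gamma)
    return norm.pdf(x, mu, sigma) ** gamma / (gamma * I ** (gamma / (1 + gamma)))

def J_matrix(mu, sigma, gamma, h=1e-4):
    # -E_f[Hessian of q_gamma w.r.t. (mu, sigma)] via central differences and trapezoidal quadrature
    x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 20001)
    w = norm.pdf(x, mu, sigma)
    E = lambda arr: np.trapz(arr * w, x)
    q = lambda m, s: q_gamma(x, m, s, gamma)
    d_mm = (q(mu + h, sigma) - 2 * q(mu, sigma) + q(mu - h, sigma)) / h**2
    d_ss = (q(mu, sigma + h) - 2 * q(mu, sigma) + q(mu, sigma - h)) / h**2
    d_ms = (q(mu + h, sigma + h) - q(mu + h, sigma - h)
            - q(mu - h, sigma + h) + q(mu - h, sigma - h)) / (4 * h**2)
    return -np.array([[E(d_mm), E(d_ms)], [E(d_ms), E(d_ss)]])

gamma = 0.5
s1, s2 = 1.0, 2.0
ratio = np.sqrt(np.linalg.det(J_matrix(0.0, s1, gamma)) / np.linalg.det(J_matrix(0.0, s2, gamma)))
print(ratio, (s1 / s2) ** (-3 + 1 / (1 + gamma)))   # the two numbers should (approximately) agree
```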

4. Simulation Studies

4.1. Setting and Results

We present the performance of posterior means under reference and moment matching priors through some simulation studies. In this section, we assume that the parametric model is the normal distribution with mean μ and variance σ 2 and consider the joint estimation problem for μ and σ 2 . We assume that the true values of μ and σ 2 are zero and one, respectively. We also assume that the contamination distribution is the normal distribution with mean ν and variance one. In other words, the data generating distribution is expressed by:
$$g(x) = (1-\varepsilon)\,N(0,1) + \varepsilon\,N(\nu,1),$$
where ε is the contamination ratio and n is the sample size. We compare the performances of estimators in terms of empirical bias and mean squared error (MSE) among three methods, which include the ordinary KL divergence-based posterior, R ( α ) -posterior, and γ -posterior (our proposal). We also employ three prior distributions for ( μ , σ ) , namely (i) uniform prior, (ii) reference prior, and (iii) moment matching prior.
Since exact calculation of the posterior means is not easy, we use an importance sampling Monte Carlo algorithm with proposal distributions $N(\bar x, s^2)$ for μ and $\mathrm{IG}(6, 5s)$ for σ (the inverse gamma distribution with parameters a and b is denoted by $\mathrm{IG}(a,b)$), where $\bar x = n^{-1}\sum_{i=1}^n x_i$ and $s^2 = (n-1)^{-1}\sum_{i=1}^n(x_i-\bar x)^2$ (for details of importance sampling, see, e.g., [23]). We carry out the importance sampling with 10,000 steps, and we compute the empirical bias and MSE of the posterior means $(\hat\mu,\hat\sigma)$ of $(\mu,\sigma)$ over 10,000 iterations. The simulation results are reported in Table 1, Table 2, Table 3 and Table 4. The reference and moment matching priors for the γ-posterior are given by (11), and those for the R(α)-posterior are "formally" given as follows:
$$\pi_R^{(\alpha)}(\mu,\sigma)\propto\sigma^{-2-\alpha},\qquad \pi_M^{(\alpha)}(\mu,\sigma)\propto\sigma^{-C_M/2},$$
where $C_M$ is a constant given by:
$$C_M = 2 + \frac{\alpha}{2(1+\alpha)} + \frac{\alpha(1+\alpha)^3(2+\alpha) + \left\{10 - \alpha^2\left(2+\alpha(5+\alpha(3+\alpha))\right)\right\}\pi^{\alpha/2}}{(1+\alpha)\left\{\alpha(1+\alpha)^2 + (2+\alpha+\alpha^2+\alpha^3)\pi^{\alpha/2}\right\}}.$$
The term “formally” means that, since the reference and moment matching priors for the R(α)-posterior strictly depend on the unknown contamination ratio and contamination distribution, we set ε = 0 in these priors. On the other hand, our proposed objective priors do not need such an assumption; we assume only the condition (7). We note that [17] also used the same formal reference prior in their simulation studies.
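For concreteness, here is a minimal sketch (ours, not the authors' code) of the importance sampling scheme described above for the γ-posterior mean of (μ, σ) under the reference prior in (11). We assume $\mathrm{IG}(a,b)$ has density proportional to $\sigma^{-a-1}e^{-b/\sigma}$, which corresponds to scipy's invgamma(a, scale=b).

```python
# Minimal sketch (ours, for illustration): importance sampling for the gamma-posterior
# mean of (mu, sigma) in the normal model, under the reference prior in (11).
import numpy as np
from scipy.stats import norm, invgamma

def log_gamma_quasi_lik(x, mu, sigma, gamma):
    # sum_i q^(gamma)(x_i; theta) for the normal model (see Section 3.2)
    I = (2 * np.pi * sigma**2) ** (-gamma / 2) / np.sqrt(1 + gamma)
    return np.sum(norm.pdf(x, mu, sigma) ** gamma / (gamma * I ** (gamma / (1 + gamma))))

def gamma_posterior_mean(x, gamma, n_draws=10000, seed=1):
    rng = np.random.default_rng(seed)
    xbar, s = np.mean(x), np.std(x, ddof=1)
    mu = rng.normal(xbar, s, n_draws)                                        # proposal N(xbar, s^2)
    sigma = invgamma.rvs(a=6, scale=5 * s, size=n_draws, random_state=rng)   # proposal IG(6, 5s)
    log_w = np.empty(n_draws)
    for k in range(n_draws):
        log_prior = (-3 + 1 / (1 + gamma)) * np.log(sigma[k])                # reference prior (11)
        log_prop = norm.logpdf(mu[k], xbar, s) + invgamma.logpdf(sigma[k], a=6, scale=5 * s)
        log_w[k] = log_gamma_quasi_lik(x, mu[k], sigma[k], gamma) + log_prior - log_prop
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.sum(w * mu), np.sum(w * sigma)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 80), rng.normal(6, 1, 20)])  # 20% contamination at nu = 6
print(gamma_posterior_mean(data, gamma=0.5))                          # should be close to (0, 1)
```

In practice, the effective sample size of the importance weights should be monitored; the proposal above simply mirrors the one used in the simulations of this section.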
The simulation results for the empirical bias and MSE of the posterior means of μ and σ are provided in Table 1, Table 2, Table 3 and Table 4. We consider three prior distributions for (μ, σ), namely uniform, reference, and moment matching priors. In these tables, we set ν = 6, ε = 0.00, 0.05, 0.20 and n = 20, 50, 100. We also set the tuning parameters for the R(α)- and γ-posteriors to 0.2, 0.3, 0.5, and 0.7.
Table 1 and Table 3 show the empirical bias and MSE of the posterior means of mean parameter μ based on the standard posterior and the R ( α ) - and γ -posteriors. The empirical bias and MSE for the two robust methods are smaller than those of the standard posterior mean (denoted by “Bayes” in Table 1, Table 2, Table 3 and Table 4) in the presence of outliers for a large sample size. When there are no outliers ( ε = 0 ), it seems that the three methods are comparable. On the other hand, when ε = 0.05 and ε = 0.20 , the standard posterior mean gets worse, while the performances of the posterior means based on the R ( α ) -posterior and the γ -posterior are comparable for both empirical bias and MSE.
We also present the results of the estimation for variance parameter σ in Table 2 and Table 4. When there are no outliers, the performances of robust Bayes estimators under the uniform prior are slightly worse. On the other hand, the reference and moment matching priors provide relatively reasonable results even if the sample size is small and ε = 0 . The empirical bias and MSE of the R ( α ) -posterior and the γ -posterior means for α , γ = 0.5 , 0.7 remain small even if the contamination ratio ε is not small. In particular, the empirical bias and MSE of the γ -posterior means for σ are shown to be drastically smaller than those of the R ( α ) -posterior.
Figure 1 shows the empirical bias and MSE of the posterior means of μ and σ under the uniform, reference, and moment matching priors when ν = 6 (fixed) and the contamination ratio ε varies from 0.00 to 0.30. In all cases, we find that the standard posterior means (i.e., the cases α, γ = 0) do not work well. For the estimation of the mean parameter μ, the R(α)- and γ-posterior means seem to be reasonable for values of ε between 0.00 and 0.20. In particular, the γ-posterior means under the reference and moment matching priors perform well even when ε = 0.30. For the estimation of the variance parameter σ, the R(α)-posterior means under the uniform prior have larger bias and MSE than the other methods. The γ-posterior mean with γ = 1.0 may still be better than the other competitors for any ε ∈ [0, 0.30]. For α, γ = 0.5, the R(α)- and γ-posterior means seem to be comparable.
Figure 2 presents the empirical bias and MSE of the posterior means of μ and σ under the same priors as in Figure 1 when the contamination ratio is ε = 0.20 (fixed) and ν varies from 0.0 to 10.0. For the estimation of the mean parameter μ in Figure 2, the empirical bias and MSE of the robust estimators remain small regardless of ν, except for the R(α)-posterior under the uniform prior. Although some differences appear near ν = 4, the γ-posterior means with γ = 1.0 perform better for the estimation of both the mean μ and the variance σ for all ν ∈ [0, 10].
In these simulation studies, the γ-posterior mean under the reference and moment matching priors has better performance for the joint estimation of (μ, σ) in most scenarios. Although we provide results only for the univariate normal distribution, other distributions (including multivariate ones) should also be considered in the future.

4.2. Selection of Tuning Parameters

The selection of the tuning parameter γ (or α) is very challenging, and to the best of our knowledge, there is no optimal choice of γ. The tuning parameter γ controls the degree of robustness; that is, a larger γ gives higher robustness. However, there is a trade-off between the robustness and the efficiency of the estimator. One solution to this problem is to use the asymptotic relative efficiency (ARE) (see, e.g., [11]). It should be noted that [11] only dealt with the one-parameter case. In general, the asymptotic relative efficiency of the robust posterior mean $\hat\theta^{(\gamma)}$ of the p-dimensional parameter θ relative to the usual posterior mean $\hat\theta$ is defined by:
$$\mathrm{ARE}(\hat\theta^{(\gamma)},\hat\theta) := \left\{\frac{\det V(\theta)}{\det V^{(\gamma)}(\theta)}\right\}^{1/p}$$
(see, e.g., [24]). This is the ratio of the determinants of the covariance matrices, raised to the power of 1 / p , where p is the dimension of the parameter θ . We now calculate the ARE ( θ ^ ( γ ) , θ ^ ) in our simulation setting. After some calculations, the asymptotic relative efficiency is given by:
$$\mathrm{ARE}(\hat\theta^{(\gamma)},\hat\theta) = \left\{\frac{2(1+\gamma)^{-6}}{(1+2\gamma)(2+4\gamma+3\gamma^2)}\right\}^{1/2} =: h(\gamma)$$
for $\gamma > 0$. We note that $h(\gamma)\to 1$ as $\gamma\to 0$. Hence, we may choose γ so as to allow only a small loss of efficiency. For example, if we require an asymptotic relative efficiency of 0.95, we may choose γ as the solution of the equation h(γ) = 0.95 (see Table 5). The curve of the function h(γ) is also given in Figure 3. Several authors have provided methods for selecting the tuning parameter (e.g., [25,26,27]). Reference [5] focused on reducing the latent bias of the estimator and recommended setting γ = 1 for the normal mean-variance estimation problem; however, this seems unreasonable in terms of the asymptotic relative efficiency (see Table 5 and Figure 3). To the best of our knowledge, there is no method that is both robust and efficient under the heavy contamination setting. Hence, other methods with higher efficiency under heavy contamination should be considered in the future.
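The calibration rule above is easy to automate; the following sketch (ours) evaluates h(γ) and solves h(γ) = 0.95 with a standard root finder (the bracketing interval is an arbitrary choice).

```python
# Illustrative sketch: choose gamma so that the asymptotic relative efficiency h(gamma) = 0.95
import numpy as np
from scipy.optimize import brentq

def h(gamma):
    # ARE for joint normal mean-variance estimation under the gamma-posterior (Section 4.2)
    return np.sqrt(2 * (1 + gamma) ** (-6) / ((1 + 2 * gamma) * (2 + 4 * gamma + 3 * gamma**2)))

print(h(0.01), h(0.1), h(0.3), h(0.5))              # reproduces the values in Table 5
gamma_star = brentq(lambda g: h(g) - 0.95, 1e-6, 1.0)
print(gamma_star)                                    # gamma achieving ARE = 0.95
```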

5. Concluding Remarks

We considered objective priors for divergence-based robust Bayesian estimation. In particular, we proved that the reference and moment matching priors under the quasi-posterior based on the γ-divergence are robust against unknown quantities in the data generating distribution. The performance of the corresponding posterior means was illustrated through some simulation studies. However, the proposed objective priors are often improper, and showing their posterior propriety remains a topic for future research. Our results should also be extended to other settings. For example, Kanamori and Fujisawa [28] proposed estimating the contamination ratio using an unnormalized model. Examining such a problem from the Bayesian perspective is also challenging because of the question of how to set a prior distribution for the unknown contamination ratio. Furthermore, it would also be interesting to consider an optimal data-dependent choice of the tuning parameter γ.

Author Contributions

T.N. and S.H. contributed to the method and algorithm; T.N. and S.H. performed the experiments; T.N. and S.H. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the JSPS Grant-in-Aid for Early-Career Scientists Grant Number JP19K14597 and the JSPS Grant-in-Aid for Young Scientists (B) Grant Number JP17K14233.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors are grateful to the referees for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Some Derivative Functions

We now put $\ell(x) = \log f_\theta(x)$, $\ell_i(x) = \partial_i\log f_\theta(x)$, $\ell_{ij}(x) = \partial_i\partial_j\log f_\theta(x)$, and $\ell_{ijk}(x) = \partial_i\partial_j\partial_k\log f_\theta(x)$, and let the norm $\|\cdot\|_p : L^p(\Omega)\to\mathbb{R}$ be defined by:
$$\|h\|_p = \left(\int_\Omega|h(x)|^pdx\right)^{1/p}.$$
We then obtain the derivative functions of $q^{(\alpha)}(x;\theta)$ with respect to θ as follows:
$$\partial_i q^{(\alpha)}(x;\theta) = f_\theta(x)^\alpha\ell_i(x) - \int_\Omega f_\theta(y)^{\alpha+1}\ell_i(y)dy,$$
$$\partial_i\partial_j q^{(\alpha)}(x;\theta) = f_\theta(x)^\alpha\left\{\alpha\ell_i(x)\ell_j(x) + \ell_{ij}(x)\right\} - \int_\Omega f_\theta(y)^{\alpha+1}\left\{(\alpha+1)\ell_i(y)\ell_j(y) + \ell_{ij}(y)\right\}dy,$$
$$\partial_i\partial_j\partial_k q^{(\alpha)}(x;\theta) = f_\theta(x)^\alpha\left[\alpha^2\ell_i(x)\ell_j(x)\ell_k(x) + \ell_{ijk}(x) + \alpha\left\{\ell_k(x)\ell_{ij}(x) + \ell_i(x)\ell_{jk}(x) + \ell_j(x)\ell_{ik}(x)\right\}\right]$$
$$\qquad - \int_\Omega f_\theta(y)^{\alpha+1}\left[(\alpha+1)^2\ell_i(y)\ell_j(y)\ell_k(y) + \ell_{ijk}(y) + (\alpha+1)\left\{\ell_k(y)\ell_{ij}(y) + \ell_i(y)\ell_{jk}(y) + \ell_j(y)\ell_{ik}(y)\right\}\right]dy.$$
Similarly, we obtain the derivative functions of $q^{(\gamma)}(x;\theta)$ as follows:
$$\partial_i q^{(\gamma)}(x;\theta) = \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{\gamma}}\ell_i(x) - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy,$$
$$\partial_i\partial_j q^{(\gamma)}(x;\theta) = \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{\gamma}}\left\{\gamma\ell_i(x)\ell_j(x) + \ell_{ij}(x)\right\} - \gamma\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\left\{\ell_j(x)\int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy + \ell_i(x)\int_\Omega f_\theta(y)^{\gamma+1}\ell_j(y)dy\right\}$$
$$\qquad + (1+2\gamma)\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{2+3\gamma}}\int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy\int_\Omega f_\theta(y)^{\gamma+1}\ell_j(y)dy - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\int_\Omega f_\theta(y)^{\gamma+1}s_{ij}(y)dy,$$
$$\partial_i\partial_j\partial_k q^{(\gamma)}(x;\theta) = \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{\gamma}}\left\{\gamma^2\ell_i(x)\ell_j(x)\ell_k(x) + \ell_{ijk}(x)\right\} + \gamma\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{\gamma}}\left\{\ell_k(x)\ell_{ij}(x) + \ell_j(x)\ell_{ik}(x) + \ell_i(x)\ell_{jk}(x)\right\}$$
$$\quad - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\left\{\gamma^2\ell_j(x)\ell_k(x) + \gamma\ell_{jk}(x)\right\}\int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\left\{\gamma^2\ell_i(x)\ell_k(x) + \gamma\ell_{ik}(x)\right\}\int_\Omega f_\theta(y)^{\gamma+1}\ell_j(y)dy$$
$$\quad - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\left\{\gamma^2\ell_i(x)\ell_j(x) + \gamma\ell_{ij}(x)\right\}\int_\Omega f_\theta(y)^{\gamma+1}\ell_k(y)dy$$
$$\quad + (1+\gamma)(1+2\gamma)\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{2+3\gamma}}\ell_k(x)\int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy\int_\Omega f_\theta(y)^{\gamma+1}\ell_j(y)dy$$
$$\quad + (1+\gamma)(1+2\gamma)\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{2+3\gamma}}\ell_j(x)\int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy\int_\Omega f_\theta(y)^{\gamma+1}\ell_k(y)dy$$
$$\quad + (1+\gamma)(1+2\gamma)\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{2+3\gamma}}\ell_i(x)\int_\Omega f_\theta(y)^{\gamma+1}\ell_j(y)dy\int_\Omega f_\theta(y)^{\gamma+1}\ell_k(y)dy$$
$$\quad - \gamma\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\ell_k(x)\int_\Omega f_\theta(y)^{\gamma+1}s_{ij}(y)dy - \gamma\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\ell_j(x)\int_\Omega f_\theta(y)^{\gamma+1}s_{ik}(y)dy - \gamma\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\ell_i(x)\int_\Omega f_\theta(y)^{\gamma+1}s_{jk}(y)dy$$
$$\quad - (1+2\gamma)(2+3\gamma)\frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{3+4\gamma}}S_{ijk} - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\int_\Omega f_\theta(y)^{\gamma+1}\left\{(\gamma+1)^2\ell_i(y)\ell_j(y)\ell_k(y) + \ell_{ijk}(y)\right\}dy - \frac{f_\theta(x)^\gamma}{\|f_\theta\|_{\gamma+1}^{1+2\gamma}}\int_\Omega f_\theta(y)^{\gamma+1}s_{ijk}(y)dy,$$
where:
$$s_{ij}(y) = (\gamma+1)\ell_i(y)\ell_j(y) + \ell_{ij}(y),\qquad s_{ijk}(y) = (\gamma+1)\left\{\ell_{ij}(y)\ell_k(y) + \ell_j(y)\ell_{ik}(y) + \ell_i(y)\ell_{jk}(y)\right\},$$
$$S_{ijk} = \int_\Omega f_\theta(y)^{\gamma+1}\ell_i(y)dy\int_\Omega f_\theta(y)^{\gamma+1}\ell_j(y)dy\int_\Omega f_\theta(y)^{\gamma+1}\ell_k(y)dy.$$

References

1. Huber, P.J.; Ronchetti, E.M. Robust Statistics, 2nd ed.; Wiley: Hoboken, NJ, USA, 2009.
2. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; Chapman & Hall: Boca Raton, FL, USA, 2011.
3. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998, 85, 549–559.
4. Jones, M.; Hjort, N.L.; Harris, I.R.; Basu, A. A comparison of related density-based minimum divergence estimators. Biometrika 2001, 88, 865–873.
5. Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivar. Anal. 2008, 99, 2053–2081.
6. Hirose, K.; Fujisawa, H.; Sese, J. Robust sparse Gaussian graphical modeling. J. Multivar. Anal. 2016, 161, 172–190.
7. Kawashima, T.; Fujisawa, H. Robust and sparse regression via γ-divergence. Entropy 2017, 19, 608.
8. Hirose, K.; Masuda, H. Robust relative error estimation. Entropy 2018, 20, 632.
9. Bissiri, P.G.; Holmes, C.C.; Walker, S.G. A general framework for updating belief distributions. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 1103–1130.
10. Hooker, G.; Vidyashankar, A.N. Bayesian model robustness via disparities. Test 2014, 23, 556–584.
11. Ghosh, A.; Basu, A. Robust Bayes estimation using the density power divergence. Ann. Inst. Stat. Math. 2016, 68, 413–437.
12. Nakagawa, T.; Hashimoto, S. Robust Bayesian inference via γ-divergence. Commun. Stat. Theory Methods 2020, 49, 343–360.
13. Jewson, J.; Smith, J.Q.; Holmes, C. Principles of Bayesian inference using general divergence criteria. Entropy 2018, 20, 442.
14. Hashimoto, S.; Sugasawa, S. Robust Bayesian regression with synthetic posterior distributions. Entropy 2020, 22, 661.
15. Bernardo, J.M. Reference posterior distributions for Bayesian inference. J. R. Stat. Soc. Ser. B Methodol. 1979, 41, 113–128.
16. Ghosh, M.; Liu, R. Moment matching priors. Sankhya A 2011, 73, 185–201.
17. Giummolè, F.; Mameli, V.; Ruli, E.; Ventura, L. Objective Bayesian inference with proper scoring rules. Test 2019, 28, 728–755.
18. Kanamori, T.; Fujisawa, H. Affine invariant divergences associated with proper composite scoring rules and their applications. Bernoulli 2014, 20, 2278–2304.
19. Ghosh, M.; Mergel, V.; Liu, R. A general divergence criterion for prior selection. Ann. Inst. Stat. Math. 2011, 63, 43–58.
20. Liu, R.; Chakrabarti, A.; Samanta, T.; Ghosh, J.K.; Ghosh, M. On divergence measures leading to Jeffreys and other reference priors. Bayesian Anal. 2014, 9, 331–370.
21. Hashimoto, S. Reference priors via α-divergence for a certain non-regular model in the presence of a nuisance parameter. J. Stat. Plan. Inference 2021, 213, 162–178.
22. Hashimoto, S. Moment matching priors for non-regular models. J. Stat. Plan. Inference 2019, 203, 169–177.
23. Robert, C.P.; Casella, G. Monte Carlo Statistical Methods; Springer: Berlin/Heidelberg, Germany, 2004.
24. Serfling, R. Approximation Theorems of Mathematical Statistics; Wiley: Hoboken, NJ, USA, 1980.
25. Warwick, J.; Jones, M. Choosing a robustness tuning parameter. J. Stat. Comput. Simul. 2005, 75, 581–588.
26. Sugasawa, S. Robust empirical Bayes small area estimation with density power divergence. Biometrika 2020, 107, 467–480.
27. Basak, S.; Basu, A.; Jones, M. On the 'optimal' density power divergence tuning parameter. J. Appl. Stat. 2020, 1–21.
28. Kanamori, T.; Fujisawa, H. Robust estimation under heavy contamination using unnormalized models. Biometrika 2015, 102, 559–572.
Figure 1. The horizontal axis is the contamination ratio ε. The red lines show the empirical bias and MSE of the γ-posterior means under the three priors when n = 100 and ν = 6. Similarly, the blue and green lines show those of the R(α)-posterior and ordinary posterior means, respectively. The uniform, reference, and moment matching priors are denoted by “Uni”, “Ref”, and “MM”, respectively.
Figure 2. The horizontal axis is the location parameter ν of the contamination distribution. The red lines show the empirical bias and MSE of the γ-posterior means under the three priors when n = 100 and ε = 0.20. Similarly, the blue and green lines show those of the R(α)-posterior and ordinary posterior means, respectively. The uniform, reference, and moment matching priors are denoted by “Uni”, “Ref”, and “MM”, respectively.
Figure 3. The curve of the asymptotic relative efficiency for normal mean and variance estimation under the γ-posterior.
Table 1. Empirical biases of the posterior means for μ.
ε n Bayes R ( α ) -Posterior γ -Posterior
α , γ 0.0 α = 0.2 α = 0.3 α = 0.5 α = 0.7 γ = 0.2 γ = 0.3 γ = 0.5 γ = 0.7
Uniform prior
0.00 20 0.002 0.003 0.003 0.002 0.001 0.003 0.003 0.003 0.002
0.00 50 0.002 0.001 0.001 0.001 0.000 0.001 0.001 0.001 0.000
0.00 100 0.000 0.000 0.000 0.000 0.001 0.000 0.000 0.000 0.001
0.05 20 0.298 0.109 0.075 0.098 0.172 0.104 0.064 0.046 0.060
0.05 50 0.301 0.053 0.020 0.009 0.016 0.051 0.017 0.004 0.002
0.05 100 0.301 0.038 0.012 0.004 0.002 0.036 0.011 0.003 0.001
0.20 20 1.192 0.917 0.800 0.815 0.973 0.908 0.755 0.596 0.615
0.20 50 1.198 0.869 0.638 0.362 0.478 0.864 0.600 0.215 0.112
0.20 100 1.201 0.862 0.578 0.158 0.108 0.859 0.537 0.065 0.015
Reference prior
0.00 20 0.002 0.003 0.004 0.004 0.003 0.003 0.004 0.004 0.004
0.00 50 0.002 0.001 0.001 0.001 0.000 0.001 0.001 0.001 0.000
0.00 100 0.000 0.000 0.000 0.000 0.001 0.000 0.000 0.000 0.001
0.05 20 0.298 0.072 0.033 0.016 0.018 0.070 0.030 0.010 0.006
0.05 50 0.301 0.041 0.013 0.002 0.001 0.040 0.011 0.001 0.001
0.05 100 0.301 0.033 0.010 0.003 0.001 0.032 0.009 0.002 0.001
0.20 20 1.192 0.808 0.558 0.295 0.293 0.803 0.537 0.227 0.152
0.20 50 1.198 0.820 0.504 0.143 0.079 0.817 0.473 0.085 0.023
0.20 100 1.201 0.838 0.495 0.071 0.027 0.836 0.457 0.029 0.006
Moment matching prior
0.00 20 0.002 0.003 0.004 0.004 0.003 0.003 0.004 0.004 0.004
0.00 50 0.002 0.001 0.001 0.001 0.000 0.001 0.001 0.001 0.000
0.00 100 0.000 0.000 0.000 0.001 0.001 0.000 0.000 0.000 0.001
0.05 20 0.298 0.059 0.025 0.010 0.008 0.059 0.024 0.009 0.007
0.05 50 0.301 0.037 0.011 0.002 0.001 0.036 0.010 0.001 0.001
0.05 100 0.301 0.031 0.009 0.002 0.001 0.030 0.009 0.002 0.001
0.20 20 1.192 0.759 0.486 0.220 0.196 0.759 0.481 0.210 0.165
0.20 50 1.198 0.799 0.462 0.111 0.043 0.797 0.441 0.079 0.025
0.20 100 1.201 0.828 0.468 0.058 0.018 0.827 0.435 0.028 0.006
Table 2. Empirical biases of the posterior means for σ.
ε n Bayes R ( α ) -Posterior γ -Posterior
α , γ 0.0 α = 0.2 α = 0.3 α = 0.5 α = 0.7 γ = 0.2 γ = 0.3 γ = 0.5 γ = 0.7
Uniform prior
0.00 20 0.058 0.148 0.225 0.733 2.089 0.136 0.184 0.330 0.620
0.00 50 0.022 0.049 0.067 0.122 0.263 0.046 0.058 0.085 0.116
0.00 100 0.011 0.024 0.031 0.053 0.088 0.022 0.028 0.039 0.051
0.05 20 0.669 0.438 0.476 1.620 4.335 0.404 0.370 0.540 1.109
0.05 50 0.660 0.203 0.144 0.188 0.475 0.189 0.116 0.110 0.139
0.05 100 0.652 0.134 0.078 0.087 0.135 0.123 0.061 0.049 0.058
0.20 20 1.732 1.848 2.086 5.500 9.627 1.769 1.727 2.207 3.833
0.20 50 1.653 1.558 1.304 1.098 3.158 1.533 1.182 0.573 0.454
0.20 100 1.626 1.508 1.151 0.506 0.563 1.495 1.042 0.198 0.113
Reference prior
0.00 20 0.001 0.009 0.006 0.007 0.013 0.007 0.001 0.041 0.117
0.00 50 0.000 0.003 0.002 0.004 0.010 0.003 0.000 0.012 0.036
0.00 100 0.000 0.002 0.001 0.002 0.006 0.002 0.000 0.005 0.016
0.05 20 0.576 0.173 0.093 0.066 0.097 0.161 0.069 0.000 0.051
0.05 50 0.625 0.119 0.050 0.028 0.029 0.110 0.035 0.003 0.030
0.05 100 0.635 0.096 0.039 0.024 0.026 0.088 0.026 0.000 0.014
0.20 20 1.580 1.281 0.954 0.659 0.697 1.258 0.877 0.427 0.303
0.20 50 1.598 1.367 0.917 0.375 0.324 1.354 0.832 0.181 0.071
0.20 100 1.599 1.421 0.937 0.241 0.196 1.413 0.839 0.068 0.014
Moment matching prior
0.00 20 0.039 0.036 0.044 0.083 0.186 0.034 0.039 0.061 0.090
0.00 50 0.015 0.014 0.016 0.029 0.067 0.013 0.014 0.019 0.027
0.00 100 0.007 0.006 0.007 0.014 0.032 0.006 0.006 0.008 0.012
0.05 20 0.516 0.093 0.021 0.029 0.113 0.089 0.016 0.021 0.023
0.05 50 0.601 0.089 0.026 0.002 0.037 0.083 0.017 0.011 0.021
0.05 100 0.623 0.082 0.027 0.010 0.005 0.075 0.017 0.003 0.010
0.20 20 1.481 1.097 0.736 0.395 0.225 1.094 0.717 0.373 0.361
0.20 50 1.559 1.293 0.808 0.276 0.165 1.287 0.748 0.162 0.084
0.20 100 1.579 1.386 0.872 0.197 0.135 1.381 0.787 0.061 0.019
Table 3. Empirical MSEs of the posterior means for μ.
ε n Bayes R ( α ) -Posterior γ -Posterior
α , γ 0.0 α = 0.2 α = 0.3 α = 0.5 α = 0.7 γ = 0.2 γ = 0.3 γ = 0.5 γ = 0.7
Uniform prior
0.00 20 0.050 0.051 0.053 0.090 0.282 0.051 0.053 0.057 0.078
0.00 50 0.020 0.021 0.022 0.023 0.027 0.021 0.022 0.023 0.025
0.00 100 0.010 0.010 0.011 0.012 0.013 0.010 0.011 0.012 0.013
0.05 20 0.223 0.098 0.081 0.280 1.081 0.096 0.075 0.076 0.159
0.05 50 0.144 0.031 0.025 0.025 0.039 0.031 0.025 0.025 0.027
0.05 100 0.118 0.015 0.012 0.013 0.013 0.014 0.012 0.013 0.014
0.20 20 1.761 1.267 1.127 2.296 4.781 1.254 1.031 0.906 1.402
0.20 50 1.571 0.950 0.647 0.311 0.879 0.944 0.613 0.188 0.088
0.20 100 1.509 0.844 0.494 0.095 0.052 0.840 0.463 0.046 0.019
Reference prior
0.00 20 0.050 0.052 0.054 0.062 0.077 0.052 0.054 0.063 0.076
0.00 50 0.020 0.021 0.022 0.024 0.027 0.021 0.022 0.024 0.028
0.00 100 0.010 0.010 0.011 0.012 0.013 0.010 0.011 0.012 0.014
0.05 20 0.223 0.080 0.065 0.067 0.086 0.080 0.064 0.077 0.066
0.05 50 0.144 0.028 0.024 0.026 0.028 0.028 0.024 0.030 0.026
0.05 100 0.118 0.014 0.012 0.013 0.014 0.014 0.012 0.015 0.013
0.20 20 1.761 1.106 0.744 0.385 0.564 1.104 0.727 0.304 0.280
0.20 50 1.571 0.881 0.497 0.111 0.057 0.879 0.477 0.082 0.042
0.20 100 1.509 0.809 0.410 0.041 0.020 0.807 0.385 0.026 0.019
Moment matching prior
0.00 20 0.050 0.052 0.055 0.064 0.080 0.052 0.055 0.063 0.074
0.00 50 0.020 0.021 0.022 0.025 0.028 0.021 0.022 0.025 0.028
0.00 100 0.010 0.010 0.011 0.012 0.014 0.010 0.011 0.012 0.014
0.05 20 0.223 0.075 0.063 0.067 0.085 0.075 0.063 0.067 0.076
0.05 50 0.144 0.028 0.024 0.026 0.030 0.028 0.024 0.026 0.029
0.05 100 0.118 0.014 0.012 0.013 0.014 0.014 0.012 0.013 0.015
0.20 20 1.761 1.039 0.648 0.295 0.394 1.043 0.655 0.286 0.290
0.20 50 1.571 0.852 0.453 0.088 0.044 0.853 0.443 0.078 0.043
0.20 100 1.509 0.794 0.385 0.034 0.018 0.794 0.365 0.025 0.018
Table 4. Empirical MSEs of the posterior means for σ.
ε n Bayes R ( α ) -Posterior γ -Posterior
α , γ 0.0 α = 0.2 α = 0.3 α = 0.5 α = 0.7 γ = 0.2 γ = 0.3 γ = 0.5 γ = 0.7
Uniform prior
0.00 20 0.033 0.062 0.104 1.110 8.455 0.057 0.080 0.195 0.747
0.00 50 0.011 0.015 0.019 0.034 0.161 0.015 0.017 0.025 0.036
0.00 100 0.005 0.006 0.007 0.011 0.018 0.006 0.007 0.009 0.012
0.05 20 0.761 0.424 0.471 7.528 37.358 0.379 0.309 0.673 3.370
0.05 50 0.553 0.095 0.051 0.066 0.950 0.087 0.040 0.035 0.047
0.05 100 0.482 0.039 0.017 0.018 0.031 0.035 0.014 0.012 0.014
0.20 20 3.262 4.185 5.830 55.081 138.264 3.874 4.117 8.080 29.181
0.20 50 2.816 2.706 2.229 1.895 27.513 2.638 1.962 0.741 0.454
0.20 100 2.682 2.405 1.704 0.483 0.506 2.372 1.526 0.146 0.038
Reference prior
0.00 20 0.027 0.030 0.033 0.040 0.059 0.030 0.032 0.041 0.058
0.00 50 0.010 0.011 0.012 0.015 0.017 0.011 0.012 0.015 0.020
0.00 100 0.005 0.006 0.006 0.007 0.008 0.006 0.006 0.007 0.009
0.05 20 0.611 0.153 0.083 0.068 0.101 0.145 0.073 0.050 0.054
0.05 50 0.504 0.054 0.023 0.019 0.021 0.051 0.021 0.017 0.021
0.05 100 0.459 0.027 0.011 0.009 0.010 0.025 0.010 0.008 0.010
0.20 20 2.731 2.283 1.624 0.941 0.982 2.232 1.482 0.548 0.304
0.20 50 2.633 2.165 1.330 0.341 0.215 2.140 1.218 0.171 0.048
0.20 100 2.595 2.158 1.268 0.144 0.070 2.143 1.144 0.046 0.014
Moment matching prior
0.00 20 0.026 0.028 0.031 0.040 0.063 0.028 0.032 0.042 0.054
0.00 50 0.010 0.011 0.012 0.015 0.019 0.011 0.012 0.015 0.020
0.00 100 0.005 0.006 0.006 0.007 0.009 0.006 0.006 0.007 0.009
0.05 20 0.525 0.105 0.058 0.046 0.052 0.104 0.057 0.048 0.056
0.05 50 0.470 0.043 0.020 0.017 0.018 0.041 0.019 0.017 0.021
0.05 100 0.443 0.023 0.010 0.008 0.009 0.022 0.009 0.008 0.010
0.20 20 2.411 1.809 1.132 0.441 0.186 1.816 1.137 0.461 0.385
0.20 50 2.507 1.974 1.120 0.222 0.082 1.971 1.065 0.153 0.054
0.20 100 2.532 2.065 1.148 0.106 0.040 2.059 1.054 0.043 0.015
Table 5. The value of γ and the corresponding asymptotic relative efficiency.
γ 0.01 0.1 0.3 0.5
ARE 0.951489 0.6222189 0.2731871 0.1359501
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
