Preprint

MS 584 Expansions for the Conditional Distribution and Density of a Standard Estimate

This version is not peer-reviewed.

Submitted: 26 June 2024

Posted: 28 June 2024

Abstract
A good approximation for the distribution of an estimate is vital for statistical inference. Here we give Edgeworth expansions for the conditional density and the conditional distribution of any multivariate standard estimate.
Subject: Computer Science and Mathematics  -   Probability and Statistics

1. Introduction and Summary

Suppose that we have a non-lattice estimate $\hat w$ of an unknown parameter $w \in R^q$ of a statistical model, based on a sample of size $n$. The distribution of a standard estimate is determined by the coefficients obtained by expanding its cumulants in powers of $n^{-1}$. In §2 we summarise the extended Edgeworth-Cornish-Fisher expansions of Withers (1984) for $\hat w$ when $q = 1$. Then we give the multivariate Edgeworth expansions to $O(n^{-2})$. We show that the distribution of $X_n = n^{1/2}(\hat w - w)$ has the form $Prob(X_n \le x) \approx \sum_{r=0}^\infty n^{-r/2} P_r(x)$, where $P_0(x) = \Phi_V(x)$, the normal distribution with covariance $V = (k_1^{i_1 i_2})$, and for $r \ge 1$, $P_r(x) = \sum_{k=1}^{3r} [P_{rk}(x) : k - r$ even$]$, where $P_{rk}(x)$ has $q^k$ terms, reducible using symmetry. Its density has a similar form. We argue that these expansions may be valid even if $q = q_n \to \infty$, provided $q_n/n^{1/6}$ is bounded.
§3 gives these expansions in complete detail when $q = 2$.
In §4 we suppose that $q \ge 2$ and partition $X_n$ as $(X_{n1}', X_{n2}')'$ of dimensions $q_1 \times 1$ and $q_2 \times 1$. We derive expansions for the conditional density and distribution of $X_{n1}$ given $X_{n2}$ to $O(n^{-2})$. §5 specialises to bivariate estimates.
§6 gives the extended Cornish-Fisher expansions for the quantiles of the conditional distribution when $q_1 = 1$. An example is the distribution of a sample mean given the sample variance.

2. Extended Edgeworth-Cornish-Fisher theory

Univariate estimates. Suppose that $\hat w$ is a standard estimate of $w \in R$ with respect to $n$, typically the sample size. That is, $E\hat w \to w$ as $n \to \infty$, and its $r$th cumulant can be expanded as
$\kappa_r(\hat w) \approx \sum_{j=r-1}^\infty n^{-j} a_{rj}$ for $r \ge 1$,   (1)
where the cumulant coefficients $a_{rj}$ may depend on $n$ but are bounded as $n \to \infty$, and $a_{21}$ is bounded away from 0. Here and below $\approx$ indicates an asymptotic expansion that need not converge. So (1) holds in the sense that
$\kappa_r(\hat w) = \sum_{j=r-1}^{I-1} n^{-j} a_{rj} + O(n^{-I})$ for $I \ge r - 1$,
where $y_n = O(x_n)$ means that $y_n/x_n$ is bounded in $n$. Withers (1984) extended Cornish and Fisher (1937) and Fisher and Cornish (1960) to show that the distribution and quantiles of
$Y_n = (n/a_{21})^{1/2}(\hat w - w)$
have asymptotic expansions in powers of $n^{-1/2}$:
$P_n(x) = Prob(Y_n \le x) \approx \Phi(x) - \phi(x) \sum_{r=1}^\infty n^{-r/2} h_r(x)$,
$p_n(x) \approx \phi(x)\,[1 + \sum_{r=1}^\infty n^{-r/2} \bar h_r(x)]$,
$\Phi^{-1}(P_n(x)) \approx x + \sum_{r=1}^\infty n^{-r/2} f_r(x)$, $P_n^{-1}(\Phi(x)) \approx x + \sum_{r=1}^\infty n^{-r/2} g_r(x)$,
where $\Phi(x) = Prob(N \le x)$, $N \sim N(0,1)$ is a unit normal random variable with density $\phi(x) = (2\pi)^{-1/2} e^{-x^2/2}$, and $h_r(x)$, $\bar h_r(x)$, $f_r(x)$, $g_r(x)$ are polynomials in $x$ and the standardized cumulant coefficients
$A_{rj} = a_{rj}\, a_{21}^{-r/2}$,
and so on, where $H_k$ is the $k$th Hermite polynomial,
$H_k(x)\phi(x) = (-d/dx)^k \phi(x)$: $H_1(x) = x$, $H_2(x) = x^2 - 1$, $H_3(x) = x^3 - 3x$, $H_4(x) = x^4 - 6x^2 + 3$.   (6)
See Withers (1984) for $g_r$, $r \le 6$, Withers (2000) for (6), and §6 for relations between $h_r, f_r, g_r$. Also,
$\ln[p_n(x)/\phi(x)] \approx \sum_{r=1}^\infty n^{-r/2} b_r(x)$, where $b_1(x) = \bar h_1(x)$, $b_2(x) = -A_{11}^2/2 + (A_{22} - A_{32}A_{11})H_2/2 - A_{32}^2(3x^4 - 12x^2 + 5)/24 + A_{43}H_4/24$.
For $r > 1$, $b_r(x)$ is a polynomial of degree only $r + 2$, while $\bar h_r(x)$ has degree $3r$.
The original Edgeworth expansion was for $\hat w$ the mean of $n$ independent identically distributed random variables from a distribution with $r$th cumulant $\kappa_r$. So (1) holds with $a_{ri} = \kappa_r I(i = r-1)$; all other $a_{ri} = 0$. An explicit formula for its general term was given in Withers and Nadarajah (2009) using Bell polynomials.
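A small numerical sketch (not from the paper) of the one-term Edgeworth correction for a sample mean: for the standardized mean of $n$ Exp(1) variables, the sample sum is exactly Gamma$(n,1)$, whose distribution function has a closed form for integer $n$, so the correction can be checked directly. All names below are illustrative.

```python
# One-term Edgeworth correction Prob(Y_n <= y) ~ Phi(y) - phi(y) lam3 H_2(y)/(6 sqrt(n)),
# checked against the exact Gamma(n,1) law of the sum of n Exp(1) variables.
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def exact_cdf_sum_exp(n, s):
    # Prob(Gamma(n,1) <= s) = 1 - sum_{k<n} e^{-s} s^k / k!   (integer n)
    term, total = math.exp(-s), 0.0
    for k in range(n):
        total += term
        term *= s / (k + 1)
    return 1 - total

n, y = 10, 0.8
lam3 = 2.0                      # kappa_3 / kappa_2^{3/2} for Exp(1)
exact = exact_cdf_sum_exp(n, n + math.sqrt(n) * y)   # Prob(Y_n <= y)
normal = Phi(y)
edge = Phi(y) - phi(y) * lam3 * (y * y - 1) / (6 * math.sqrt(n))
print(abs(normal - exact), abs(edge - exact))   # Edgeworth error is smaller
```

Even at $n = 10$ the correction removes most of the normal-approximation error.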
Ordinary Bell polynomials. For a sequence $e = (e_1, e_2, \ldots)$, the partial ordinary Bell polynomial $\tilde B_{rs} = \tilde B_{rs}(e)$ is defined by the identity
for $s \ge 0$, $S^s = \sum_{r=s}^\infty z^r \tilde B_{rs}(e)$ where $S = \sum_{r=1}^\infty z^r e_r$, $z \in R$.
So $\tilde B_{r0} = \delta_{r0}$, $\tilde B_{r1} = e_r$, $\tilde B_{r2} = \sum_{i=1}^{r-1} e_i e_{r-i}$, $\tilde B_{rr} = e_1^r$,
where $\delta_{00} = 1$, $\delta_{r0} = 0$ for $r \ne 0$. They are tabled on p. 309 of Comtet (1974). The complete ordinary Bell polynomial $\tilde B_r(e)$ is defined in terms of $S$ by
$e^S = \sum_{r=0}^\infty z^r \tilde B_r(e)$. So $\tilde B_0(e) = 1$ and for $r \ge 1$, $\tilde B_r(e) = \sum_{s=1}^r \tilde B_{rs}(e)/s!$:
$\tilde B_1(e) = e_1$, $\tilde B_2(e) = e_2 + e_1^2/2$, $\tilde B_3(e) = e_3 + e_1 e_2 + e_1^3/6$.
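The two defining identities above can be checked mechanically: $\tilde B_{rs}(e)$ is the $z^r$ coefficient of $S(z)^s$, and $\tilde B_r(e)$ the $z^r$ coefficient of $e^{S(z)}$. A minimal sketch (function names are mine, not the paper's):

```python
# Partial and complete ordinary Bell polynomials via power-series coefficients.
from math import factorial

def poly_mul(a, b, order):
    c = [0.0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                c[i + j] += ai * bj
    return c

def bell_partial(e, r, s):
    S = [0.0] + list(e[:r])        # coefficients of S(z) = e_1 z + e_2 z^2 + ...
    p = [1.0]                      # S^0
    for _ in range(s):
        p = poly_mul(p, S, r)
    return p[r] if r < len(p) else 0.0

def bell_complete(e, r):
    return sum(bell_partial(e, r, s) / factorial(s) for s in range(r + 1))

e = (2.0, 3.0, 5.0)
print(bell_partial(e, 3, 1))   # e_3 = 5
print(bell_partial(e, 3, 2))   # 2 e_1 e_2 = 12
print(bell_partial(e, 3, 3))   # e_1^3 = 8
print(bell_complete(e, 3))     # e_3 + e_1 e_2 + e_1^3/6
```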
Multivariate estimates. Suppose that $\hat w$ is a standard estimate of $w \in R^p$ with respect to $n$. That is, $E\hat w \to w$ as $n \to \infty$, and for $r \ge 1$, $1 \le i_1, \ldots, i_r \le p$, the $r$th order cumulants of $\hat w$ can be expanded as
$\bar k^{1\cdots r} = \kappa(\hat w_{i_1}, \ldots, \hat w_{i_r}) \approx \sum_{d=r-1}^\infty \bar k_d^{1\cdots r} n^{-d}$, $\bar k_d^{1\cdots r} = k_d^{i_1\cdots i_r}$,   (11)
where the cumulant coefficients $\bar k_d^{1\cdots r} = k_d^{i_1\cdots i_r}$ may depend on $n$ but are bounded as $n \to \infty$. So the bar replaces $i_j$ by $j$: $\bar k_0^1 = w^{i_1}$, $\bar k_1^{12} = k_1^{i_1 i_2}$.
$X_n = n^{1/2}(\hat w - w) \to_L N_p(0, V)$ for $V = (k_1^{i_1 i_2})$, $p \times p$,
with density and distribution
$\phi_V(x) = (2\pi)^{-p/2}(\det V)^{-1/2}\exp(-x'V^{-1}x/2)$, $\Phi_V(x) = \int_{-\infty}^x \phi_V(x)\,dx$.
$V$ may depend on $n$, but we assume that $\det V$ is bounded away from 0. Set
$\bar t_r = t_{i_r}$, $\bar b_{2d}^{1\cdots r} = \bar k_d^{1\cdots r}$, $\bar b_{2d+1}^{1\cdots r} = 0$, $e_j(t) = \sum_{r=1}^{j+2} \bar b_{r+j}^{1\cdots r}\bar t_1\cdots\bar t_r/r!$. So $e_1 = \bar k_1^1\bar t_1 + \bar k_2^{1\cdots 3}\bar t_1\bar t_2\bar t_3/6$, $e_2 = \bar k_2^{12}\bar t_1\bar t_2/2 + \bar k_3^{1\cdots 4}\bar t_1\cdots\bar t_4/24$.
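A plain-math sketch (not from the paper) of the density $\phi_V$ in the $2\times 2$ case, with the quadratic form $x'V^{-1}x$ written out; for diagonal $V$ it factors into two univariate normal densities, which the assertion checks. Function names are illustrative.

```python
# phi_V for a 2x2 covariance V, and the univariate normal density.
import math

def phi_V(x, V):
    # x = (x1, x2); V = ((v11, v12), (v12, v22)), symmetric positive definite.
    (v11, v12), (_, v22) = V
    det = v11 * v22 - v12 * v12
    q = (v22 * x[0]**2 - 2 * v12 * x[0] * x[1] + v11 * x[1]**2) / det
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(det))

def phi1(x, var=1.0):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

x = (0.7, -0.3)
# diagonal V: joint density = product of marginals
assert abs(phi_V(x, ((2.0, 0.0), (0.0, 3.0))) - phi1(x[0], 2.0) * phi1(x[1], 3.0)) < 1e-12
print(phi_V(x, ((2.0, 0.5), (0.5, 3.0))))
```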
$\tilde B_r(e(t)) = \sum_{k=1}^{3r}[\bar P_r^{1\cdots k}\bar t_1\cdots\bar t_k : k - r$ even$]$,
where for $r \ge 3$, $\bar P_r^{1\cdots k}$ is a function of the $\bar k_e^{1\cdots r}$ given for the first time in the Appendix. In (13), (14) and below, we use the tensor summation convention of implicitly summing $i_1, \ldots, i_r$ over their range $1, \ldots, q$. We make $\bar P_r^{1\cdots k}$ symmetric in $i_1, \ldots, i_k$ using the operator $S$ that symmetrizes over $i_1, \ldots, i_k$:
$\bar P_1^1 = \bar k_1^1$, $\bar P_1^{1\cdots 3} = \bar k_2^{1\cdots 3}/6$, $\bar P_2^{12} = \bar k_2^{12}/2 + \bar k_1^1\bar k_1^2/2$,
$\bar P_2^{1\cdots 4} = \bar k_3^{1\cdots 4}/24 + S\bar k_1^1\bar k_2^{234}/6$, $\bar P_2^{1\cdots 6} = S\bar k_2^{123}\bar k_2^{456}/72$, $\bar P_3^{1\cdots 3} = \bar k_3^{1\cdots 3}/6 + S\bar k_1^1\bar k_2^{23}/2 + \bar k_1^1\bar k_1^2\bar k_1^3/6$,
$\bar P_3^{1\cdots 5} = \bar k_4^{1\cdots 5}/120 + S\bar k_3^{1\cdots 4}\bar k_1^5/24 + S\bar k_2^{12}\bar k_2^{3\cdots 5}/12 + S\bar k_2^{123}\bar k_1^4\bar k_1^5/12$, $\bar P_3^{1\cdots 7} = S\bar k_2^{123}\bar k_3^{4\cdots 7}/144 + S\bar k_2^{123}\bar k_2^{4\cdots 6}\bar k_1^7/72$, $\bar P_3^{1\cdots 9} = S\bar k_2^{1\cdots 3}\bar k_2^{4\cdots 6}\bar k_2^{7\cdots 9}/6^3$.
The terms involving $S$ are given in Appendix A. By Withers and Nadarajah (2010b) or Withers (2024), $X_n$ has distribution and density
$Prob(X_n \le x) \approx \sum_{r=0}^\infty n^{-r/2}P_r(x)$, $p_{X_n}(x) \approx \sum_{r=0}^\infty n^{-r/2}p_r(x)$, $x \in R^p$,
where $P_0(x) = \Phi_V(x)$, $p_0(x) = \phi_V(x)$, and for $r \ge 1$,
$P_r(x) = \sum_{k=1}^{3r}[P_{rk}(x) : k - r$ even$]$, $P_{rk}(x) = \bar P_r^{1\cdots k}\bar O^{1\cdots k}\Phi_V(x)$,   (21)
$p_r(x) = \tilde p_r(x)\phi_V(x)$, $\tilde p_r(x) = \sum_{k=1}^{3r}[\tilde p_{rk} : k - r$ even$]$, $\tilde p_{rk} = \bar P_r^{1\cdots k}\bar H^{1\cdots k}$,   (22), (23)
where $\bar O^{1\cdots k} = (-\bar\partial_1)\cdots(-\bar\partial_k)$ for $\bar\partial_j = \partial/\partial x_{i_j}$, and $\bar H^{1\cdots k} = \bar H^{1\cdots k}(y)$, given by $\bar H^{1\cdots k}\phi_V(x) = \bar O^{1\cdots k}\phi_V(x)$,
is the multivariate Hermite polynomial. For their dual form see Withers and Nadarajah (2014). By Withers (2020), for $i = \surd{-1}$,
$\bar H^{1\cdots k} = E\prod_{j=1}^k(\bar y_j + i\bar Y_j)$ where $\bar y_j = y_{i_j}$, $\bar Y_j = Y_{i_j}$, $y = V^{-1}x$, $Y \sim N_p(0, V^{-1})$. So
$H^1 = y_1$, $\bar H^1 = \bar y_1$, $H^{12} = y_1y_2 - V^{12}$, $\bar H^{12} = \bar y_1\bar y_2 - \bar V^{12}$, $H^{1\cdots 3} = y_1y_2y_3 - \sum^3 V^{12}y_3$, where $\sum^3 V^{12}y_3 = V^{12}y_3 + V^{13}y_2 + V^{23}y_1$, $H^{1\cdots 4} = y_1\cdots y_4 - \sum^6 V^{12}y_3y_4 + \sum^3 V^{12}V^{34}$, $H^{1\cdots 5} = y_1\cdots y_5 - \sum^{10}V^{12}y_3y_4y_5 + \sum^{15}V^{12}V^{34}y_5$, $H^{1\cdots 6} = y_1\cdots y_6 - \sum^{15}V^{12}y_3\cdots y_6 + \sum^{45}V^{12}V^{34}y_5y_6 - \sum^{15}V^{12}V^{34}V^{56}$,
where $V^{i_1i_2}$ is the $(i_1,i_2)$ element of $V^{-1}$, and $\bar V^{j_1j_2}$ is the $(i_{j_1},i_{j_2})$ element of $V^{-1}$. This gives $\bar H^{1\cdots k}$ in terms of the moments of $Y$. For example,
$P_1(x) = e_1(-\partial/\partial x)\Phi_V(x) = \sum_{r=1,3}\bar b_{r+1}^{1\cdots r}\bar O^{1\cdots r}\Phi_V(x)/r! = \sum_{k=1,3}P_{1k}(x)$, $P_{11}(x) = \bar k_1^1(-\bar\partial_1)\Phi_V(x)$, $P_{13}(x) = \bar k_2^{1\cdots 3}\bar O^{1\cdots 3}\Phi_V(x)/6$, $p_1(x) = \bar k_1^1(-\bar\partial_1)\phi_V(x) + \bar k_2^{1\cdots 3}\bar O^{1\cdots 3}\phi_V(x)/6$,
$P_2(x) = \sum_{k=2,4,6}\bar P_2^{1\cdots k}\bar O^{1\cdots k}\Phi_V(x)$, $p_2(x) = \sum_{k=2,4,6}\bar P_2^{1\cdots k}\bar H^{1\cdots k}\phi_V(x)$,
$P_3(x) = \sum_{k=1,3,5,7,9}\bar P_3^{1\cdots k}\bar O^{1\cdots k}\Phi_V(x)$, $p_3(x) = \sum_{k=1,3,5,7,9}\bar P_3^{1\cdots k}\bar H^{1\cdots k}\phi_V(x)$.
This gives the Edgeworth expansion for the distribution of $Y_n$ to $O(n^{-2})$. See Withers (2024) for more terms.
For large $q$, $P_{rk}, \tilde p_{rk}$ of (21) and (23) have at most $q^{3r}$ terms. So if $q = q_n$ and $M_{rn} = \max_k|\bar P_r^{1\cdots k}|$, then $n^{-r/2}(P_r(x), p_r(x)) = O(M_{rn}\nu_n^r)$, where $\nu_n = n^{-1/2}q_n^3$. So if, for example, $M_{rn}$ is bounded, then the Edgeworth series should converge if $q_n/n^{1/6} \to 0$.
The log density can be expanded as
$\ln[p_{Y_n}(x)/\phi_V(x)] \approx \sum_{r=1}^\infty n^{-r/2}b_r(x)$.
So by (18), $\tilde p_r(x) = \tilde B_r(b(x))$ where $b = (b_1, b_2, \ldots)$.
See Withers and Nadarajah (2016). Also, for $H_k(x)$ of (6),
$H_{j^k} = E(y_j + iY_j)^k = \tau_j^kH_k(\tau_j^{-1}y_j)$ where $\tau_j = (V^{jj})^{1/2}$: $H_j = y_j$, $H_{j^2} = y_j^2 - V^{jj}$, $H_{j^3} = y_j^3 - 3V^{jj}y_j$, $H_{j^4} = y_j^4 - 6V^{jj}y_j^2 + 3(V^{jj})^2$,
$H_{j^5} = y_j^5 - 10V^{jj}y_j^3 + 15(V^{jj})^2y_j$, $H_{j^6} = y_j^6 - 15V^{jj}y_j^4 + 45(V^{jj})^2y_j^2 - 15(V^{jj})^3$.
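The scaling rule $H_{j^k} = \tau_j^kH_k(\tau_j^{-1}y_j)$ can be checked against the listed expansions using the standard three-term recursion for the (probabilists') Hermite polynomials. A sketch, not from the paper:

```python
# Verify H_{j^k} = tau^k H_k(y/tau), tau = (V^{jj})^{1/2}, against the expansions above.
def hermite(k, x):
    # H_0 = 1, H_1 = x, H_{k+1}(x) = x H_k(x) - k H_{k-1}(x)
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def H_jk(k, y, Vjj):
    tau = Vjj ** 0.5
    return tau ** k * hermite(k, y / tau)

y, Vjj = 1.7, 2.3
assert abs(H_jk(2, y, Vjj) - (y**2 - Vjj)) < 1e-9
assert abs(H_jk(3, y, Vjj) - (y**3 - 3*Vjj*y)) < 1e-9
assert abs(H_jk(4, y, Vjj) - (y**4 - 6*Vjj*y**2 + 3*Vjj**2)) < 1e-9
```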
Example 1. 
Let $\hat w$ be a sample mean. Then $E\hat w = w$, and only the leading coefficients in (11) are non-zero. So $\bar k_1^1 = \bar P_1^1 = \tilde p_{11} = 0$. In order needed, the non-zero $\bar P_r^{1\cdots k}$ are
$\bar P_1^{1\cdots 3} = \bar k_2^{1\cdots 3}/3!$, $\bar P_2^{1\cdots 4} = \bar k_3^{1\cdots 4}/4!$, $\bar P_2^{1\cdots 6}$ of (16), $\bar P_3^{1\cdots 5} = \bar k_4^{1\cdots 5}/5!$, $\bar P_3^{1\cdots 7} = S\bar k_2^{123}\bar k_3^{4\cdots 7}/144$, $\bar P_3^{1\cdots 9}$ of (17).
$\tilde p_{rk}$ and $P_{rk}$ have $q^k$ terms, but many are duplicates. We now show how symmetry reduces this to the $\binom{q+k-1}{k}$ distinct terms. We use the multinomial coefficient $\binom{k}{a\,b\,\cdots} = k!/(a!\,b!\cdots)$. For example, $\binom{3}{1\,1\,1} = 6$.
Set $T_r^{i_1\cdots i_k} = \bar P_r^{1\cdots k}\bar H^{1\cdots k}$, where tensor summation is not used. By (22),
$\tilde p_{rk} = \sum_{i_1,\ldots,i_k=1}^q T_r^{i_1\cdots i_k}$. So
$\tilde p_{r1} = \sum_{i_1=1}^q T_r^{i_1}$, $\tilde p_{r2} = \sum_{i_1=1}^q T_r^{i_1i_1} + 2\sum_{i_1>i_2}T_r^{i_1i_2}$,
$\tilde p_{r3} = \sum_{i_1=1}^q T_r^{i_1i_1i_1} + 3\sum_{i_1\ne i_2}T_r^{i_1i_1i_2} + \binom{3}{1\,1\,1}\sum_{i_1>i_2>i_3}T_r^{i_1i_2i_3}$,
$\tilde p_{r4} = \sum_{i_1=1}^q T_r^{i_1^4} + 4\sum_{i_1\ne i_2}T_r^{i_1^3i_2} + \binom{4}{2}\sum_{i_1>i_2}T_r^{i_1^2i_2^2} + \binom{4}{2\,1\,1}\sum_{i_1;\,i_2>i_3}T_r^{i_1^2i_2i_3} + 4!\sum_{i_1>i_2>i_3>i_4}T_r^{i_1i_2i_3i_4}$,
$\tilde p_{r6} = \sum_{i_1=1}^q T_r^{i_1^6} + 6\sum_{i_1\ne i_2}T_r^{i_1^5i_2} + \binom{6}{2}\sum_{i_1\ne i_2}T_r^{i_1^4i_2^2} + \binom{6}{3}\sum_{i_1>i_2}T_r^{i_1^3i_2^3} + \binom{6}{4\,1\,1}\sum_{i_1;\,i_2>i_3}T_r^{i_1^4i_2i_3} + \binom{6}{3\,2\,1}\sum_{i_1,i_2,i_3}T_r^{i_1^3i_2^2i_3} + \binom{6}{2\,2\,2}\sum_{i_1>i_2>i_3}T_r^{i_1^2i_2^2i_3^2} + \binom{6}{3\,1\,1\,1}\sum_{i_1;\,i_2>i_3>i_4}T_r^{i_1^3i_2i_3i_4} + \binom{6}{2\,2\,1\,1}\sum_{i_1>i_2;\,i_3>i_4}T_r^{i_1^2i_2^2i_3i_4} + \binom{6}{2\,1\,1\,1\,1}\sum_{i_1;\,i_2>i_3>i_4>i_5}T_r^{i_1^2i_2i_3i_4i_5} + 6!\sum_{i_1>\cdots>i_6}T_r^{i_1\cdots i_6}$,
where in each sum all $i_j$ are distinct. Similarly we can write out $\tilde p_{rk}$ for $k = 5, 7, 9$. This reduces the number of terms in $\tilde p_{rk}$ from $q^k$ to $q + \binom{q}{2}$ for $k = 2$, to $q + q(q-1) + \binom{q}{3}$ for $k = 3$, to $q + q(q-1) + \binom{q}{2} + q\binom{q-1}{2} + \binom{q}{4}$ for $k = 4$, and to $q + 5\binom{q}{2} + 10\binom{q}{3} + 10\binom{q}{4} + 5\binom{q}{5} + \binom{q}{6}$ for $k = 6$.
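The bookkeeping above is just a count of multisets: grouping the $q^k$ summands by symmetry leaves one term per multiset of $k$ indices from $\{1,\ldots,q\}$, i.e. $\binom{q+k-1}{k}$ terms, and the per-pattern counts for $k=6$ sum to that by a Vandermonde identity. A sketch (not from the paper):

```python
# Count distinct index multisets versus raw q^k terms.
from itertools import combinations_with_replacement
from math import comb

def distinct_terms(q, k):
    return sum(1 for _ in combinations_with_replacement(range(q), k))

q = 5
for k in (2, 3, 4, 6):
    assert distinct_terms(q, k) == comb(q + k - 1, k)

# k = 6, grouped by number of distinct values (pattern coefficients 1,5,10,10,5,1):
k6 = q + 5*comb(q, 2) + 10*comb(q, 3) + 10*comb(q, 4) + 5*comb(q, 5) + comb(q, 6)
assert k6 == comb(q + 5, 6)
print(distinct_terms(q, 6), q ** 6)   # 210 vs 15625
```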
If we reinterpret $T_r^{i_1\cdots i_k}$ as $\bar P_r^{1\cdots k}(-\bar\partial_1)\cdots(-\bar\partial_k)\Phi_V(x)$, where again tensor summation is not used, then we can reinterpret the above expression for $\tilde p_{rk}$ as an expression for $P_{rk}(x)$. For example,
$P_{r2}(x) = \sum_{i_1=1}^q\bar P_r^{i_1i_1}(-\partial_{i_1})^2\Phi_V(x) + 2\sum_{i_1>i_2}\bar P_r^{i_1i_2}(-\partial_{i_1})(-\partial_{i_2})\Phi_V(x)$.
These results can be extended to Type B estimates, that is, to $\hat w$ with cumulant expansions not of type (11) but
$\kappa(\hat w_{i_1}, \ldots, \hat w_{i_r}) \approx \sum_{d=2r-2}^\infty b_d^{i_1\cdots i_r}n^{-d/2}$.

3. The Distribution of $n^{1/2}(\hat w - w)$ for $q = 2$

We first give $\tilde p_{rk}$ of (22) for $r \le 3$, and then $P_{rk}(x)$ of (21).
$\tilde p_{rk} = \sum_{b=0}^k P_r(1^{k-b}2^b)H_{k-b,b}$, where $P_r(1^a2^b) = \binom{a+b}{a}P_r^{1^a2^b}$,   (31)
for $\bar P_r^{1\cdots k}$ of (16), (17), where we use the dual notation
$H_{ab} = H_{1^a2^b} = E(y_1 + iY_1)^a(y_2 + iY_2)^b$.
[display equation not recovered from the source]
So $H_{k0}$ and $H_{0k}$ are given by (29) with $j = 1$ and 2, and
$H_{11} = y_1y_2 - V^{12}$, $H_{21} = y_1^2y_2 - V^{11}y_2 - 2V^{12}y_1$, $H_{12} = y_1y_2^2 - V^{22}y_1 - 2V^{12}y_2$.
For more examples see Withers (2000). $P_r(1^b2^a)$ is just $P_r(1^a2^b)$ with 1 and 2 reversed. The other $P_r(1^a2^b)$ needed in (31) for $\tilde p_{rk}$ are as follows.
For $\tilde p_{11}$: $P_1(j) = k_1^j$. For $\tilde p_{13}$: $P_1(j^3) = k_2^{jjj}/6$, $P_1(1^22) = k_2^{112}/2$. For $\tilde p_{31}$: $P_3(j) = k_2^j$. For $\tilde p_{33}$: $P_3(j^3) = k_3^{j^3}/6 + k_2^{jj}k_1^j/2 + (k_1^j)^3/6$, $P_3(1^22) = k_3^{112}/2 + k_1^1k_2^{12} + k_1^2k_2^{11}/2 + (k_1^1)^2k_1^2/2$. For $\tilde p_{22}$: $P_2(j^2) = k_2^{jj}/2 + (k_1^j)^2/2$, $P_2(12) = k_2^{12} + k_1^1k_1^2$. For $\tilde p_{24}$: $P_2(j^4) = k_3^{j^4}/24 + k_1^jk_2^{j^3}/6$, $P_2(1^32) = k_3^{1112}/6 + k_1^1k_2^{112}/2 + k_1^2k_2^{111}/6$, $P_2(1^22^2) = k_3^{1122}/4 + k_1^1k_2^{122}/2 + k_1^2k_2^{112}/2$.
For $\tilde p_{26}$: $P_2(j^6) = (k_2^{jjj})^2/72$, $P_2(1^52) = k_2^{111}k_2^{112}/12$, $P_2(1^42^2) = k_2^{111}k_2^{122}/12 + (k_2^{112})^2/8$, $P_2(1^32^3) = k_2^{111}k_2^{222}/36 + k_2^{112}k_2^{122}/4$.
For $\tilde p_{35}$: $P_3(j^5) = k_4^{j^5}/120 + k_3^{j^4}k_1^j/24 + k_2^{jjj}[k_2^{jj} + (k_1^j)^2]/12$, $P_3(1^42)/5 = P_3^{1^42} = k_4^{1^42}/120 + S_1/24 + S_2/12 + S_3/12$, where $S_1 = (4k_1^1k_3^{1112} + k_1^2k_3^{1111})/5$, $S_2 = (3k_2^{11}k_2^{112} + 2k_2^{12}k_2^{111})/5$, $S_3 = [2k_1^1k_1^2k_2^{111} + 3(k_1^1)^2k_2^{112}]/5$; and $P_3(1^32^2) = 10P_3^{1^32^2} = k_4^{1^32^2}/12 + k_3^{1^32}k_1^2/6 + k_3^{1122}k_1^1/4 + k_2^{11}k_2^{122}/4 + k_2^{12}k_2^{112}/2 + k_2^{22}k_2^{111}/12 + k_2^{111}(k_1^2)^2/12 + k_2^{112}k_1^1k_1^2/2 + k_2^{122}(k_1^1)^2/4$.
For $\tilde p_{37}$: $P_3(j^7) = k_3^{j^4}k_2^{j^3}/144 + (k_2^{j^3})^2k_1^j/72$, $P_3(1^62) = 7(S_4/144 + S_5/72)$, where $S_4 = (3k_2^{112}k_3^{1111} + 4k_2^{111}k_3^{1112})/7$, $S_5 = [k_1^2(k_2^{111})^2 + 6k_1^1k_2^{111}k_2^{112}]/7$, so that $P_3(1^62) = k_2^{112}k_3^{1111}/48 + k_2^{111}k_3^{1112}/36 + k_1^2(k_2^{111})^2/72 + k_1^1k_2^{111}k_2^{112}/12$. Also $P_3(1^52^2) = k_3^{1^4}k_2^{122}/48 + k_3^{1^32}k_2^{112}/12 + k_3^{1122}k_2^{111}/24 + k_2^{111}k_2^{112}k_1^2/12 + k_2^{111}k_2^{122}k_1^1/12 + (k_2^{112})^2k_1^1/8$, and $P_3(1^42^3) = 35P_3^{1^42^3} = A/144 + B/72$ for $A = k_3^{1^4}k_2^{222} + 12k_3^{1^32}k_2^{122} + 18k_3^{1122}k_2^{112} + 4k_3^{1222}k_2^{111}$, $B = (2k_2^{111}k_2^{222} + 18k_2^{112}k_2^{122})k_1^1 + (6k_2^{111}k_2^{122} + 9(k_2^{112})^2)k_1^2$.
For $\tilde p_{39}$: $P_3(j^9) = (k_2^{j^3})^3/6^3$, $P_3(1^82) = 9(k_2^{111})^2k_2^{112}/6^3$, and for $a^{123} = \bar k_2^{123}$: $6^3P_3(1^72^2) = 9(a^{111})^2a^{122} + 27a^{111}(a^{112})^2$, $6^3P_3(1^62^3) = 3[(a^{111})^2a^{222} + 18a^{111}a^{112}a^{122} + 9(a^{112})^3]$, $6^3P_3(1^52^4) = 9[2a^{111}a^{112}a^{222} + 3a^{111}(a^{122})^2 + 9(a^{112})^2a^{122}]$.
(18) and (22) now give the distribution and density of $X_n = n^{1/2}(\hat w - w)$ to $O(n^{-2})$. Set
$H_{ab}^* = (-\partial_1)^a(-\partial_2)^b\Phi_V(x)$. So $H_{ab}^* = H_{a-1,b-1}\,\phi_V(x)$ if $a > 0, b > 0$; $H_{a0}^* = -\int_{-\infty}^{x_2}H_{a-1,0}\,\phi_V(x)\,dx_2$ if $a > 0$; $H_{0b}^* = -\int_{-\infty}^{x_1}H_{0,b-1}\,\phi_V(x)\,dx_1$ if $b > 0$.
Then $P_{rk}(x)$ of (21) is given by replacing $H_{ab}$ by $H_{ab}^*$ in the expressions above for $\tilde p_{rk}$. That is,
$P_{rk}(x) = \sum_{b=0}^k P_r(1^{k-b}2^b)H_{k-b,b}^*$.
(18) and (21) now give $Prob(X_n \le x)$ to $O(n^{-2})$ for $X_n = n^{1/2}(\hat w - w)$.

4. The Conditional Density and Distribution

For $q = q_1 + q_2$, $q_1 \ge 1$, $q_2 \ge 1$, partition $x$, $y = V^{-1}x$, $X_n = n^{1/2}(\hat w - w)$ and $X \sim N_q(0, V)$ as $(x_1', x_2')'$, $(y_1', y_2')'$, $(X_{n1}', X_{n2}')'$, $(X_1', X_2')'$, where $x_i, y_i, X_{ni}$ are vectors of length $q_i$. Partition $V, V^{-1}$ as $(V_{ij})$, $(V^{ij})$, where $V_{ij}, V^{ij}$ are $q_i \times q_j$.
The conditional density of $X_{n1}$ given $(X_{n2} = x_2)$ is
$p_{1|2}(x_1) = p_{X_n}(x)/p_{X_{n2}}(x_2) = \phi_{1|2}(x_1)(1 + S)/(1 + S_2)$,   (34)
where
$S \approx \sum_{r=1}^\infty n^{-r/2}\tilde p_r(x)$, $S_2 \approx \sum_{r=1}^\infty n^{-r/2}f_r$, $f_r = p_r^*(x_2)$,   (36)
$p_r^*(x_2)$ is $\tilde p_r(x)$ of (22) for $X_{n2}$, and $\phi_{1|2}(x_1)$ is the density of $X_1 \mid (X_2 = x_2)$. By (37)-(39), §2.5 of Anderson (1958),
$\phi_{1|2}(x_1) = \phi_V(x)/\phi_{V_{22}}(x_2) = \phi_{V_{1|2}}(x_1 - \mu_{1|2})$,
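The ratio identity can be checked numerically in the bivariate case: $\phi_V(x)/\phi_{V_{22}}(x_2)$ equals a normal density in $x_1$ with mean $V_{12}V_{22}^{-1}x_2$ and variance $V_{11} - V_{12}^2/V_{22}$. A sketch (names are mine):

```python
# Conditional normal density as a ratio of joint to marginal, q1 = q2 = 1.
import math

def phi1(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def phi2(x1, x2, v11, v12, v22):
    det = v11 * v22 - v12 * v12
    q = (v22 * x1**2 - 2 * v12 * x1 * x2 + v11 * x2**2) / det
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(det))

v11, v12, v22 = 2.0, 0.8, 1.5
x1, x2 = 0.4, -1.1
mu = v12 / v22 * x2                  # mu_{1|2}
var = v11 - v12 * v12 / v22          # V_{1|2}
lhs = phi2(x1, x2, v11, v12, v22) / phi1(x2, v22)
rhs = phi1(x1 - mu, var)
assert abs(lhs - rhs) < 1e-12
print(lhs)
```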
where $\mu_{1|2} = V_{12}V_{22}^{-1}x_2$ and $V_{1|2} = V_{11} - V_{12}V_{22}^{-1}V_{21}$.
The distribution of $X_1 \mid (X_2 = x_2)$ is
$\Phi_{1|2}(x_1) = \Phi_{V_{1|2}}(x_1 - \mu_{1|2})$.   (39)
[display equation not recovered from the source]
By (22), for $r \ge 1$ and $\bar P_r^{1\cdots k}$ of (14)-(16),
$p_r^*(x_2) = \sum_{k=1}^{3r}[p_{rk}^* : k - r$ even$]$, where $p_{rk}^* = \bar P_r^{1\cdots k}\bar H_{q_2}^{1\cdots k}$,   (41)
and $\bar H_{q_2}^{1\cdots k}$ is given by replacing $y = V^{-1}x$ and $(V^{ij}) = V^{-1}$ in $\bar H_q^{1\cdots k}$ by
$z = V_{22}^{-1}x_2$ and $(U^{ij}) = V_{22}^{-1}$,   (42)
and now implicit summation in (41) is for $i_1, \ldots, i_k$ over $q_1 + 1, \ldots, q$. So
$p_1^*(x_2) = \sum_{k=1,3}p_{1k}^*$, $p_{11}^* = \sum_{i_1=q_1+1}^q\bar k_1^1\bar H_{q_2}^1$, $p_{13}^* = \sum_{i_1,i_2,i_3=q_1+1}^q\bar k_2^{1\cdots 3}\bar H_{q_2}^{1\cdots 3}/6$, $\bar H_{q_2}^1 = \bar z_1$, $\bar H_{q_2}^{1\cdots 3} = \bar z_1\bar z_2\bar z_3 - \sum^3\bar U^{12}\bar z_3$,
$p_2^*(x_2) = \sum_{k=2,4,6}p_{2k}^*$ with $p_{2k}^* = \bar P_2^{1\cdots k}\bar H_{q_2}^{1\cdots k}$,
$p_3^*(x_2) = \sum_{k=1,3,5,7,9}p_{3k}^*$, and so on. For $\tilde B_{rs}(e)$ of (7), set
$B_r^* = B_r^*(e) = \sum_{s=0}^r\tilde B_{rs}(e)$. So $B_0^* = 1$, $B_1^* = e_1$, $B_2^* = e_2 + e_1^2$, $B_3^* = e_3 + 2e_1e_2 + e_1^3$, and $(1 + S_2)^{-1} = \sum_{r=0}^\infty n^{-r/2}C_r$ where $C_r = B_r^*(-f)$ for $f_r$ of (36):
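The inversion $(1+S_2)^{-1}$ is an ordinary power-series reciprocal, and the resulting $C_0 = 1$, $C_1 = -f_1$, $C_2 = -f_2 + f_1^2$, $C_3 = -f_3 + 2f_1f_2 - f_1^3$ can be checked by recursion. A sketch (not from the paper):

```python
# Reciprocal of 1 + f_1 t + f_2 t^2 + ... by the convolution recursion
# C_r + sum_{j=1}^r f_j C_{r-j} = 0.
def series_reciprocal(f, order):
    C = [1.0]
    for r in range(1, order + 1):
        C.append(-sum(f[j - 1] * C[r - j] for j in range(1, r + 1) if j <= len(f)))
    return C

f = [0.5, -0.25, 2.0]
C = series_reciprocal(f, 3)
f1, f2, f3 = f
assert C[0] == 1.0
assert abs(C[1] - (-f1)) < 1e-12
assert abs(C[2] - (-f2 + f1**2)) < 1e-12
assert abs(C[3] - (-f3 + 2*f1*f2 - f1**3)) < 1e-12
```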
$C_0 = 1$, $C_1 = -f_1$, $C_2 = -f_2 + f_1^2$, $C_3 = -f_3 + 2f_1f_2 - f_1^3$.
So the conditional density $p_{1|2}(x_1)$ of (34), relative to $\phi_{1|2}(x_1)$ of (37), is
$p_{1|2}(x_1)/\phi_{1|2}(x_1) \approx \sum_{r=0}^\infty n^{-r/2}D_r$, where
$D_r = \sum_{j=0}^r\tilde p_jC_{r-j}$ with $\tilde p_0 = 1$: $D_0 = 1$, $D_1 = \tilde p_1 - f_1$, $D_2 = \tilde p_2 - \tilde p_1f_1 + f_1^2 - f_2$, $D_3 = \tilde p_3 - \tilde p_2f_1 + \tilde p_1(f_1^2 - f_2) - f_3 + 2f_1f_2 - f_1^3$.   (47)
So now we have the conditional density to $O(n^{-2})$. The expansion for the conditional distribution about $\Phi_{1|2}(x_1)$ of (39) is
$P_{1|2}(x_1) = Prob(X_{n1} \le x_1 \mid X_{n2} = x_2) \approx \Phi_{1|2}(x_1) + \sum_{r=1}^\infty n^{-r/2}G_r$, where $G_r = \int_{-\infty}^{x_1}D_r\,d\Phi_{1|2}(x_1) = \int^{x_1}D_r$ for short,   (50)
$G_r = \sum_{s=0}^r C_{r-s}g_s$, $g_0 = \Phi_{1|2}(x_1)$,   (51)
$g_r = \int_{-\infty}^{x_1}\tilde p_r\,d\Phi_{1|2}(x_1) = \sum_{k=1}^{3r}[G_{rk} : k - r$ even$]$,   (52)
$G_{rk} = \bar P_r^{1\cdots k}\bar I^{1\cdots k}$,   (53)
$\bar I^{1\cdots k} = \int_{-\infty}^{x_1}\bar H_q^{1\cdots k}\,d\Phi_{1|2}(x_1)$, expressible in terms of $\theta = \phi_{V_{22}}(x_2)$ and derivatives of $\Phi_V(x)$.   (54)
This gives $g_r$ in terms of $\bar I^{1\cdots k}$, given by (54) in terms of $\theta$ and derivatives of $\Phi_V(x)$. (51) now gives $G_r$ in terms of $f_r$ of (36). So $G_1, G_2, G_3$ and (50) give the conditional distribution to $O(n^{-2})$. Alternatively, as $\bar H_q^{1\cdots k}$ is a polynomial in $x_1 = (x_{11}, \ldots, x_{1q_1})'$, by (37), $\bar I^{1\cdots k} = \int_{-\infty}^{x_1}\bar H_q^{1\cdots k}\,d\Phi_{1|2}(x_1)$ is linear in
$\int_{-\infty}^{x_1}x_{1j_1}\cdots x_{1j_s}\,d\Phi_{1|2}(x_1) = \int_{-\infty}^u(\mu_{1|2} + u)_{1j_1}\cdots(\mu_{1|2} + u)_{1j_s}\,d\Phi_{V_{1|2}}(u)$
for $0 \le s \le k$, where $u = x_1 - \mu_{1|2}$. We now illustrate this.
The case $q_1 = 1$. So $x_1$, $X_1$, $X_{n1}$, $V_{11}$ are scalars. Set $\sigma_{1|2} = V_{1|2}^{1/2}$,
$u = \sigma_{1|2}^{-1}(x_1 - \mu_{1|2})$, $U_n = \sigma_{1|2}^{-1}(X_{n1} - \mu_{1|2})$.   (55)
By (39), $U_n \mid (X_{n2} = x_2) \to_L U \sim N(0,1)$. By (50), for $C_r$ of (46),
$P_n(u) = Prob(U_n \le u \mid X_{n2} = x_2) \approx \sum_{r=0}^\infty n^{-r/2}G_r(u)$, where $G_r(u) = \sum_{s=0}^rC_{r-s}g_s(u)$, $G_0(u) = g_0(u) = \Phi(u)$, and for $r \ge 1$, $g_r(u) = g_r$,   (56)
given by (52) in terms of $G_{rk}(u) = G_{rk}$. For $\theta$ of (53), by (37),
$\theta^{-1}\phi_V(x)\,dx_1 = \phi(u)\,du$. Set
$H^{1\cdots k}[y] = h^{1\cdots k}(u) = \sum_{s=0}^kh_s^{1\cdots k}u^s = H_q^{1\cdots k}$ at $y = V^{-1}x$,   (57)
so that each $\bar y_j$ is linear in $u$: $\bar y_j = \bar a_j + \bar b_ju$, say.
By (53),
$G_{rk} = \bar P_r^{1\cdots k}\bar I^{1\cdots k}$, where $\bar I^{1\cdots k} = \sum_{s=0}^k\bar h_s^{1\cdots k}\gamma_s$ and $\gamma_s = \int_{-\infty}^ut^s\phi(t)\,dt$:   (59)
$\gamma_0 = \Phi(u)$, $\gamma_1 = -\phi(u)$, $\gamma_2 = \Phi(u) - u\phi(u)$, $\gamma_3 = -(u^2 + 2)\phi(u)$, and generally $\gamma_s = -u^{s-1}\phi(u) + (s-1)\gamma_{s-2}$.   (60)
[further display equations not recovered from the source]
$G_{r3} = \bar P_r^{1\cdots 3}\bar I^{1\cdots 3}$ for $\bar I^{1\cdots 3} = \sum_{s=0}^3\bar h_s^{1\cdots 3}\gamma_s$, where $\bar h_0^{1\cdots 3} = \prod_{j=1}^3\bar a_j - \sum^3\bar a_1\bar V^{23}$, $\bar h_1^{1\cdots 3} = \sum^3\bar b_1(\bar a_2\bar a_3 - \bar V^{23})$, $\bar h_2^{1\cdots 3} = \sum^3\bar a_1\bar b_2\bar b_3$, $\bar h_3^{1\cdots 3} = \prod_{j=1}^3\bar b_j$.

5. The Case $q_1 = q_2 = 1$

In this case $q = 2$ and, for $z, U^{11}$ of (42) and $u$ of (55),
$x_j$, $V_{i_1i_2}$, $V^{i_1i_2}$ are scalars, $V_{1|2} = V_{11} - V_{12}^2V_{22}^{-1}$, $z = V_{22}^{-1}x_2$, $U^{11} = V_{22}^{-1}$, $\Phi_{1|2}(x_1) = \Phi_{V_{1|2}}(x_1 - \mu_{1|2}) = \Phi(u)$. By (41), for $r \ge 1$, $p_r^*(x_2) = \sum_{k=1}^{3r}[p_{rk}^* : k - r$ even$]$, where $p_{rk}^* = P_r(2^k)H_k^*$, $H_k^* = H_k(x_2, V_{22}) = \sigma_2^kH_k(\sigma_2x_2)$ for $\sigma_2 = V_{22}^{-1/2}$,
and $H_k(x)$ of (6). For example, by (30), $H_3^* = z^3 - 3V_{22}^{-1}z \propto H_3(\sigma_2x_2)$. $\tilde p_{rk}$ and $P_{rk}(x)$ are given in §3 in terms of $P_r(1^a2^b) = \binom{a+b}{a}P_r^{1^a2^b}$.
(47) gives $D_r$ in terms of $\tilde p_r$ and $p_r^*$, which are given for $r \le 3$ by (22) in terms of $\tilde p_{rk}$ of §3. For $H_{ab}$ of (32), set
$H_{ab}(u) = \sum_{s=0}^{a+b}H_{ab\,s}u^s = H_{ab}$ at $y = V^{-1}x$, $x_1 = \mu_{1|2} + \sigma_{1|2}u$.
For example,
$H_{10}(u) = y_1$, $H_{01}(u) = y_2$, $y_i = h_0^i + h_1^iu$. So $H_{10\,s} = h_s^1$, $H_{01\,s} = h_s^2$,
$G_{rk} = \sum_{a=0}^kP_r(1^a2^{k-a})I_{a,k-a}^k$, where $I_{ab}^k = \int_{-\infty}^uH_{ab}(t)\phi(t)\,dt = \sum_{s=0}^{a+b}H_{ab\,s}\gamma_s$,   (62)
for $\gamma_s$ of (59) and (60). For example,
$G_{r1} = P_r(1)I_{10}^1 + P_r(2)I_{01}^1$ for $I_{10}^1 = I^1$, $I_{01}^1 = I^2$ of (62). $G_{r3} = \sum_{a=0}^3P_r(1^a2^{3-a})I_{a,3-a}^3$. So we need $I_{ab}^3 = \sum_{s=0}^3H_{ab\,s}\gamma_s$ for $ab = 30, 21, 12, 03$.
$H_{30} = H^{111} = \sum_{s=0}^3h_s^{111}u^s$ of (57), where by (63), $h_0^{111} = a_1^3 - 3a_1V^{11}$, $h_1^{111} = 3b_1[a_1^2 - V^{11}]$, $h_2^{111} = 3a_1b_1^2$, $h_3^{111} = b_1^3$; $H_{21} = H^{112} = \sum_{s=0}^3h_s^{112}u^s$, where $h_0^{112} = a_1^2a_2 - 2a_1V^{12} - a_2V^{11}$, $h_1^{112} = 2b_1[a_1a_2 - V^{12}] + b_2[a_1^2 - V^{11}]$, $h_2^{112} = 2a_1b_1b_2 + a_2b_1^2$, $h_3^{112} = b_1^2b_2$,
and $H_{ba\,s}$ is given by reversing 1 and 2 in $H_{ab\,s}$. Alternatively, we can use
Theorem 1. 
Set $\theta = \phi_{V_{22}}(x_2)$. For $k - r$ even, $G_{rk}$ of (53) is given by
$G_{rk} = \theta^{-1}\big[P_r(2^k)\int_{-\infty}^{x_1}H_{0k}\,\phi_V(x)\,dx_1 - b_{rk}\,\phi_V(x)\big]$ for $r \ge 1$, $k \ge 1$,
where
$b_{rk} = \sum_{a=1}^kP_r(1^a2^{k-a})H_{a-1,k-a}$.   (66)
[the remaining display equations, (67) and (68), were not recovered from the source]
PROOF. For $\bar I^{1\cdots k}$ of (54),
$G_{rk} = \bar P_r^{1\cdots k}\bar I^{1\cdots k} = P_r(2^k)I(2^k) + \sum_{a=1}^kP_r(1^a2^{k-a})I(1^a2^{k-a})$, where $\theta I(2^k) = \int_{-\infty}^{x_1}H_{0k}\,\phi_V(x)\,dx_1$, and for $a \ge 1$, $\theta I(1^a2^b) = -H_{a-1,b}\,\phi_V(x)$. $\square$
This gives $G_{rk}$, and so $g_r$ of (52), and so $P_{1|2}$ to $O(n^{-(r+1)/2})$, in terms of the coefficients $P_r(1^a2^b)$. So (67) gives $g_1, g_2, g_3$ in terms of $P_r(1^a2^b)$ of §3 via
$B_1 = \sum_{k=1,3}b_{1k}$, where $b_{11} = P_1(1) = k_1^1$, $b_{13} = P_1(1^3)H_{20} + P_1(1^22)H_{11} + P_1(12^2)H_{02} = (k_2^{111}H_{20} + 3k_2^{112}H_{11} + 3k_2^{122}H_{02})/6$; $B_2 = \sum_{k=2,4,6}b_{2k}$, where $b_{22} = P_2(1^2)H_{10} + P_2(12)H_{01}$, $b_{24} = P_2(1^4)H_{30} + P_2(1^32)H_{21} + P_2(1^22^2)H_{12} + P_2(12^3)H_{03}$, $b_{26} = P_2(1^6)H_{50} + P_2(1^52)H_{41} + P_2(1^42^2)H_{32} + P_2(1^32^3)H_{23} + P_2(1^22^4)H_{14} + P_2(12^5)H_{05}$;
$B_3 = \sum_{k=1,3,5,7,9}b_{3k}$ for $b_{3k}$ of (66).
This gives the explicit form for (66), despite the work needed to obtain $H_{ab\,s}$ of (64).
Example 2. 
If the distribution of $\hat w$ is symmetric about $w$, then for $r$ odd, $p_r(x) = P_r(x) = 0$, and the non-zero $\bar P_2^{1\cdots k}$ are
$\bar P_2^{12} = \bar k_2^{12}/2$, $\bar P_2^{1\cdots 4} = \bar k_3^{1\cdots 4}/24$.
Example 3. 
Let $\hat w$ be a sample mean. Then $E\hat w = w$, and only the leading coefficients in (11) are non-zero. So $\bar k_1^1 = \bar P_1^1 = \tilde p_{11} = p_{11}^* = 0$. The non-zero $\bar P_r^{1\cdots k}$ were given in Example 1. For $q = 2$, $\tilde p_{rk}$ and $P_{rk}(x)$ are given by §3 with these non-zero $\bar P_r^{1\cdots k}$, and $\tilde p_{11} = \tilde p_{22} = \tilde p_{31} = 0$.
For $\tilde p_{13}$: $P_1(j^3) = k_2^{j^3}/3!$, $P_1(1^22) = k_2^{112}/2$, $P_1(12^2) = k_2^{122}/2$. For $\tilde p_{24}$: $P_2(j^4) = k_3^{j^4}/4!$, $P_2(1^32) = k_3^{1112}/6$, $P_2(12^3) = k_3^{1222}/6$, $P_2(1^22^2) = k_3^{1122}/4$. For $\tilde p_{35}$: $P_3(j^5) = k_4^{j^5}/5!$, $P_3(1^42)/5 = k_4^{1^42}/5!$, $P_3(1^32^2) = k_4^{1^32^2}/12$. For $\tilde p_{37}$: $P_3(j^7) = k_3^{j^4}k_2^{j^3}/144$, $P_3(1^62) = k_2^{112}k_3^{1111}/48 + k_2^{111}k_3^{1112}/36$, $P_3(1^52^2) = k_3^{1^4}k_2^{122}/48 + k_3^{1^32}k_2^{112}/12 + k_3^{1122}k_2^{111}/24$, $P_3(1^42^3) = A/144$ for $A = k_3^{1^4}k_2^{222} + 12k_3^{1^32}k_2^{122} + 18k_3^{1122}k_2^{112} + 4k_3^{1222}k_2^{111}$.
$\bar P_2^{1\cdots 6}$ needed for $\tilde p_{26}$ of §3 does not simplify. Nor does $\bar P_3^{1\cdots 9}$ of §3, needed for $\tilde p_{39}$.
Example 4. 
Consider the classical problem of the distribution of a sample mean, given the sample variance. So $q = 2$. Let $\hat w_1, \hat w_2$ be the usual unbiased estimates of the 1st 2 cumulants $w_1, w_2$ from a univariate random sample of size $n$ from a distribution with $r$th cumulant $\kappa_r$. So $w_1 = \kappa_1$, $w_2 = \kappa_2$. By the last 2 equations of §12.15 and (12.35)-(12.38) of Stuart and Ord (1991), the cumulant coefficients needed for $\bar P_r^{1\cdots k}$ of (14) for $r \le 3$, that is, the coefficients needed for the conditional density to $O(n^{-2})$, are
$k_1^{11} = \kappa_2$, $k_1^{12} = \kappa_3$, $k_1^{22} = \kappa_4 + 2\kappa_2^2$, $V = \begin{pmatrix}\kappa_2 & \kappa_3\\ \kappa_3 & \kappa_4 + 2\kappa_2^2\end{pmatrix}$, $k_1^1 = k_1^2 = 0$, $k_2^{111} = \kappa_3$, $k_2^{112} = k_2^{122} = 0$, $k_2^{222} = (6) + 12(24) + 4(3^2) + 8(2^3)$,
$k_2^{11} = k_2^{12} = 0$, $k_2^{22} = 2(2^2)$, $k_3^{1111} = (4)$, $k_3^{1112} = (5)$, $k_3^{1122} = k_3^{1222} = 0$, $k_3^{2222} = (8) + 24(26) + 32(35) + 32(4^2) + 144(2^24) + 96(23^2) + 48(2^4)$, $k_2^1 = k_2^2 = k_3^{111} = k_3^{112} = k_3^{122} = 0$, $k_3^{222} = 12(24) + 16(2^3)$, $k_4^{1^5} = k_4^{1^32^2} = k_4^{1^22^3} = k_4^{12^4} = 0$, $k_4^{1^42} = (6)$, $k_4^{2^5} = (10) + 40(28) + 80(37) + 200(46) + 96(5^2) + 480(2^26) + 1280(235) + 1280(24^2) + 960(3^24) + 1920(2^34) + 1920(2^23^2) + 384(2^5)$, where $(i_1^{j_1}i_2^{j_2}\cdots) = \kappa_{i_1}^{j_1}\kappa_{i_2}^{j_2}\cdots$.
(47) gives $D_r$ in terms of $\tilde p_r$ and $p_r^*$, that is, in terms of $\tilde p_{rk}$ and $p_{rk}^*$ of §3 in terms of $P_r(1^a2^b)$ of (31). In this example many of these are 0. By (15)-(17) and Appendix A, the non-zero $P_r(1^a2^b)$ are, in order needed,
$P_1(j^3) = k_2^{jjj}/6$, $P_2(2^2) = \kappa_2^2$, $P_2(j^4) = k_3^{jjjj}/24$, $P_2(1^32) = k_3^{1112}/6 = \kappa_5/6$, $P_2(j^6) = (k_2^{jjj})^2/72$, $P_2(1^32^3) = k_2^{111}k_2^{222}/36$, $P_3(j^3) = k_3^{jjj}/6$, $P_3(1^42) = k_4^{1^42}/24 = \kappa_6/24$, $P_3(2^5) = k_4^{2^5}/120 + k_2^{22}k_2^{222}/12$, $P_3(j^7) = k_2^{jjj}k_3^{j^4}/144$, $P_3(1^62) = k_2^{111}k_3^{1112}/36$, $P_3(1^42^3) = k_2^{222}k_3^{1^4}/144$, $P_3(1^32^4) = k_2^{111}k_3^{2^4}/144 + k_2^{222}k_3^{1^32}/36$, $P_3(j^9) = (k_2^{jjj})^3/6^3$, $P_3(1^62^3) = 3(k_2^{111})^2k_2^{222}/6^3$, $P_3(1^32^6) = 3k_2^{111}(k_2^{222})^2/6^3$. So
$\tilde p_{11} = 0$, $\tilde p_{13} = P_1(1^3)H_{30} + P_1(2^3)H_{03}$, $\tilde p_{22} = P_2(2^2)H_{02}$, $\tilde p_{24} = P_2(1^4)H_{40} + P_2(2^4)H_{04} + P_2(1^32)H_{31}$, $\tilde p_{26} = P_2(1^6)H_{60} + P_2(2^6)H_{06} + P_2(1^32^3)H_{33}$, $\tilde p_{31} = 0$, $\tilde p_{33} = P_3(1^3)H_{30} + P_3(2^3)H_{03}$, $\tilde p_{35} = P_3(1^42)H_{41} + P_3(2^5)H_{05}$, $\tilde p_{37} = P_3(1^7)H_{70} + P_3(1^62)H_{61} + P_3(1^42^3)H_{43} + P_3(1^32^4)H_{34} + P_3(2^7)H_{07}$, $\tilde p_{39} = P_3(1^9)H_{90} + P_3(1^62^3)H_{63} + P_3(1^32^6)H_{36} + P_3(2^9)H_{09}$.
(24)-(26) now give $\tilde p_r(x)$ and $p_r^*(x)$ for $r \le 3$. By (18) and (47), this gives the conditional density $p_{1|2}(x_1)$ to $O(n^{-2})$. (67) gives $g_r$, needed for the conditional distribution $P_{1|2}(x_1)$ to $O(n^{-2})$, in terms of $(A_r, B_r)$ of (68). So
$B_1 = b_{13} = P_1(1^3)H_{20}$; $B_2 = \sum_{k=2,4,6}b_{2k}$, where $b_{22} = P_2(1^2)H_{10} + P_2(12)H_{01}$, $b_{24} = P_2(1^4)H_{30} + P_2(1^32)H_{21}$, $b_{26} = P_2(1^6)H_{50} + P_2(1^32^3)H_{23}$; $B_3 = \sum_{k=1,3,5,7,9}b_{3k}$, where $b_{31} = 0$, $b_{33} = P_3(1^3)H_{20}$, $b_{35} = P_3(1^42)H_{31}$, $b_{37} = P_3(1^7)H_{60} + P_3(1^62)H_{51} + P_3(1^42^3)H_{33} + P_3(1^32^4)H_{24}$,
$b_{39} = P_3(1^9)H_{80} + P_3(1^62^3)H_{53} + P_3(1^32^6)H_{26}$.

6. Conditional Cornish-Fisher Expansions

Suppose that $q_1 = 1$. Here we invert the conditional distribution (56) to obtain its extended Cornish-Fisher expansions, similar to (4). For any function $f(u)$ with finite derivatives, set $f^j = (d/du)^jf(u)$.
Lemma 1. 
Suppose that $G_0(u): R \to (0,1)$ is 1-to-1 increasing with $j$th derivative $G_0^j$, and for some $t \in R$,
$P(u) = G_0(u) + S_1$, where $S_1 \approx \sum_{r=1}^\infty t^rG_r(u)$. Set $G_r = G_r(u)$, $Q(u) = G_0^{-1}(P(u))$.
Then
$Q(u) \approx u + \sum_{r=1}^\infty t^rJ_r$, where $J_1 = G_1/G_0^1$,   (70)
and for $r \ge 2$, $J_r$ is given in terms of $G_1, \ldots, G_r$ and the derivatives of $G_0$ as in the proof below. Also,
$P^{-1}(G_0(u)) \approx u + \sum_{r=1}^\infty t^rK_r$,   (72)
where $K_1 = -J_1$, $K_2 = J_1J_1^1 - J_2$, $K_3 = -J_3 - J_2^1K_1 - J_1^1K_2 - J_1^2K_1^2/2$.   (73)
PROOF. Set $B_{rj} = \tilde B_{rj}(J)$ and $S_2 = \sum_{r=1}^\infty t^rJ_r$, so that for $j \ge 1$, $S_2^j = \sum_{r=j}^\infty t^rB_{rj}$. Then $P(u) = G_0(u + S_2) \approx \sum_{j=0}^\infty G_0^jS_2^j/j!$. So by (8), $G_r(u) = \sum_{j=1}^rG_0^jB_{rj}/j! = G_0^1J_r + \sum_{j=2}^rG_0^jB_{rj}/j!$, giving
$J_2 = (G_0^1)^{-1}[G_2 - G_0^2J_1^2/2]$, $J_3 = (G_0^1)^{-1}[G_3 - G_0^2J_1J_2 - G_0^3J_1^3/6]$.
For the inverse expansion, set $S_3 = \sum_{r=1}^\infty t^rK_r$. Then $u = Q(u + S_3) \approx \sum_{j=0}^\infty S_3^jQ^j/j!$, where $Q^j \approx \sum_{r=0}^\infty t^rJ_r^j$. Equating coefficients of $t^k$: at $t^1$, $0 = J_1 + K_1$, so $K_1 = -J_1$; at $t^2$, $0 = J_2 + K_1J_1^1 + K_2$, so $K_2 = -J_2 - K_1J_1^1 = J_1J_1^1 - J_2$.
One obtains $K_3$ similarly. A different form for (73) was given in Theorem A2 of Withers (1983). So
$K_4 = -J_4 - \sum_{j=1}^3J_{4-j}^1K_j - J_2^2K_1^2/2 - J_1^2K_1K_2 - J_1^3K_1^3/6$, $K_5 = -J_5 - \sum_{j=1}^4J_{5-j}^1K_j - J_3^2K_1^2/2 - J_2^2K_1K_2 - J_1^2(K_2^2/2 + K_1K_3) - J_2^3K_1^3/6 - J_1^3K_1^2K_2/2 - J_1^4K_1^4/24$.
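The leading term of Lemma 1 can be checked numerically: for $Q(u) = G_0^{-1}(G_0(u) + tG_1(u))$ the expansion gives $J_1 = G_1(u)/G_0^1(u)$, so $Q(u) - u - tJ_1$ should be $O(t^2)$. The choices of $G_0$, $G_1$ below are arbitrary illustrations, not from the paper.

```python
# Check Q(u) = G0^{-1}(G0(u) + t G1(u)) = u + t J1 + O(t^2), J1 = G1/G0'.
import math

def G0(u):
    return u + u ** 3          # increasing: G0' = 1 + 3u^2 > 0

def G0_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):       # bisection
        mid = (lo + hi) / 2
        if G0(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def G1(u):
    return math.sin(u)

u = 0.6
J1 = G1(u) / (1 + 3 * u ** 2)
errs = []
for t in (1e-2, 1e-3):
    Q = G0_inv(G0(u) + t * G1(u))
    errs.append(abs(Q - u - t * J1))
print(errs)
assert errs[0] < 1e-3 and errs[1] < 1e-5   # errors shrink like t^2
```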
We can now give the quantiles of the conditional distribution (56).
Theorem 2. 
Suppose that $P_n(u) \approx \sum_{r=0}^\infty n^{-r/2}G_r(u)$ with $G_0(u) = \Phi(u)$. Set $G_r = G_r(u)$. Then $\Phi^{-1}(P_n(u)) \approx u + \sum_{k=1}^\infty n^{-k/2}J_k$ and $P_n^{-1}(\Phi(u)) \approx u + \sum_{k=1}^\infty n^{-k/2}K_k$, where $J_k = \sum_{r=1}^k\phi(u)^{-r}J_{kr}$, $K_k = \sum_{r=1}^k\phi(u)^{-r}K_{kr}$. Here superscripts on $G_r$ denote derivatives and powers are written $(G_r)^j$:
$J_{k1} = -K_{k1} = G_k$, $J_{22} = u(G_1)^2/2$, $J_{32} = uG_1G_2$, $J_{33} = (2u^2 + 1)(G_1)^3/6$, $J_1^{1,1} = uG_1 + G_1^1$, $J_1^{2,1} = (u^2 + 1)G_1 + 2uG_1^1 + G_1^2$, $K_{22} = u(G_1)^2/2 + G_1G_1^1$, $K_{32} = uG_1G_2 + G_1^1G_2 + G_1G_2^1$, $K_{33} = -(3u/2)(G_1)^2G_1^1 - G_1(G_1^1)^2 - (G_1)^2G_1^2/2 - (2u^2 + 1)(G_1)^3/6$.
A simpler formula for $J_k$ is
$J_k = \sum_{j=1}^kc_j(u)\tilde B_{kj}(\lambda)/j!$, where $\lambda_r = \phi(u)^{-1}G_r(u)$   (74)
and $c_j(u) = \phi(u)^j(d/dp)^j\Phi^{-1}(p)$ at $p = \Phi(u)$: $c_1 = 1$, $c_2 = u$, $c_3 = 2u^2 + 1$.
PROOF. Apply Lemma 1 to (56) with $t = n^{-1/2}$. Take $G_0(u) = \Phi(u)$ and $G_r = \sum_{s=0}^rC_{r-s}g_s$ for $g_s$ given by (52) in terms of $G_{rk}$ of (59). So
$G_0^1 = \phi(u)$ and for $j \ge 1$, $G_0^j = (-1)^{j-1}H_{j-1}(u)\phi(u)$,
and for $P(u) = P_n(u)$ of (56), $\Phi^{-1}(P_n(u))$ and $P_n^{-1}(\Phi(u))$ are given by (70) and (72) in terms of $J_k, K_k$ and their derivatives. These are given by
$J_k^i = \sum_{r=1}^k\phi(u)^{-r}J_k^{i,r}$, $K_k^i = \sum_{r=1}^k\phi(u)^{-r}K_k^{i,r}$, where, for example, $J_2^{1,1} = G_2^1 + uG_2$, $J_2^{1,2} = (u^2 + 1/2)(G_1)^2 + uG_1G_1^1$.
(74) follows from (3.2) of Withers (1984). $\square$
We had hoped to read the conditional $a_{ri}$ off the conditional density. But the expansion (47) cannot be put into the form (3) if $\bar k_2^{1\cdots 3} \ne 0$, as the coefficient of $x^2$ in $\bar h_1(x)$ of (5) is 0. So the conditional estimate is generally not a standard estimate. (An exception is when $\hat w \sim N_q(w, n^{-1}V)$, since then $\bar k_2^{1\cdots 3} = 0$ and, by (50), $P_{1|2}(x_1) = \Phi_{1|2}(x_1)$ of (39). We have yet to see what exponential families this extends to.) It might be possible to remedy this by extending the results here to Type B estimates. But there seems little point in doing so.

7. Conclusions

§2 and Appendix A give the density and distribution of $X_n = n^{1/2}(\hat w - w)$ to $O(n^{-2})$, for $\hat w$ any standard estimate, in terms of certain functions of the cumulant coefficients $\bar k_j^{1\cdots r}$ of (11), the coefficients $\bar P_r^{1\cdots k}$ of (14)-(17). Most estimates of interest are standard estimates, including functions of sample moments, like the sample correlation, and any multivariate function of k-statistics. §3 gave the density and distribution of $X_n = n^{1/2}(\hat w - w)$ in more detail when $q = 2$, using the dual notation $P_r(1^a2^b)$. §4 gave the conditional density and distribution of $X_{n1}$ given $X_{n2}$ to $O(n^{-2})$, where $(X_{n1}', X_{n2}')'$ is any partition of $X_n$. The expansion (47) gives the conditional density of a standard estimate in terms of $D_r$ of (47). The conditional distribution (50) to $O(n^{-2})$ requires the function $\bar I^{1\cdots k}$ of (54), or its expansion (59) or (65). §6 gave the extended Cornish-Fisher expansions for the quantiles of the conditional distribution when $q_1 = 1$.

8. Discussion

A good approximation for the distribution of an estimate is vital for statistical inference. It enables one to explore the distribution's dependence on underlying parameters, such as correlation. Our analytic method avoids the need for simulation or jack-knife or bootstrap methods, while providing greater accuracy. Hall (1992) uses the Edgeworth expansion to show that the bootstrap gives accuracy to $O(n^{-1})$. Hall (1988) says that "2nd order correctness usually cannot be bettered". Fortunately this is not true for our analytic method. Simulation, while popular, can at best shine a light on behaviour when there is only a small number of parameters.
Estimates based on a sample of independent but not identically distributed random vectors are also generally standard estimates. For example, for a univariate sample mean $\bar{w} = n^{-1} \sum_{j=1}^n X_{jn}$, where $X_{jn}$ has $r$th cumulant $\kappa_{rjn}$, we have $\kappa_r(\bar{w}) = n^{1-r} \bar{\kappa}_r$, where $\bar{\kappa}_r = n^{-1} \sum_{j=1}^n \kappa_{rjn}$ is the average $r$th cumulant. For some examples, see Skovgaard (1981a, 1981b) and Withers and Nadarajah (2010a, 2020). The last is for a function of a weighted mean of complex random matrices.
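The averaged-cumulant identity $\kappa_r(\bar{w}) = n^{1-r}\bar{\kappa}_r$ is easy to check numerically. The sketch below is ours, not from the paper; the exponential scales $\theta_j$ are arbitrary illustrative choices. It simulates the mean of independent exponentials with different scales and compares the empirical second and third cumulants of $\bar{w}$ with the formula.

```python
# Check kappa_r(w_bar) = n^(1-r) * kappa_bar_r for a mean of independent,
# non-identically distributed variables (illustrative sketch; the scales
# theta_j are arbitrary choices, not taken from the paper).
import random

random.seed(12345)
theta = [1.0, 2.0, 3.0, 4.0, 5.0]            # X_j ~ Exponential with mean theta_j
n = len(theta)

# Exact cumulants of an exponential with mean t: kappa_2 = t^2, kappa_3 = 2 t^3.
k2_bar = sum(t**2 for t in theta) / n        # average 2nd cumulant
k3_bar = sum(2 * t**3 for t in theta) / n    # average 3rd cumulant
k2_theory = n**(1 - 2) * k2_bar              # = 2.2
k3_theory = n**(1 - 3) * k3_bar              # = 3.6

reps = 200_000
means = [sum(random.expovariate(1.0 / t) for t in theta) / n for _ in range(reps)]
m = sum(means) / reps
k2_emp = sum((x - m)**2 for x in means) / reps   # empirical variance
k3_emp = sum((x - m)**3 for x in means) / reps   # empirical 3rd cumulant

print(k2_theory, k2_emp)   # empirical value close to 2.2
print(k3_theory, k3_emp)   # empirical value close to 3.6
```

With 200,000 replications the empirical cumulants agree with $n^{1-r}\bar{\kappa}_r$ to within simulation error of a percent or so.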
A promising approach is the use of conditional cumulants. §6.2 of McCullagh (1984) uses conditional cumulants to give the conditional density of a sample mean to $O(n^{-3/2})$. §5.6 of McCullagh (1987) gave formulas for the first four cumulants conditional on $X_2 = x_2$ when $X_1$ and $X_2$ are uncorrelated. He says that assumption can be removed, but gives no details. That might give an alternative to our approach, but this seems unlikely, as the conditional estimate is generally not a standard estimate.
Equation (7.5) of Barndorff-Nielsen and Cox (1989) gave the third-order expansion for the conditional density of a sample mean to $O(n^{-3/2})$, but did not attempt to integrate it.
Here we have only considered expansions about the normal. However, expansions about other distributions can greatly reduce the number of terms by matching the leading bias coefficient. The framework for this is given by Withers and Nadarajah (2010a). For expansions about a matching gamma, see Withers and Nadarajah (2011, 2014).
The results here can be extended to tilted (saddlepoint) expansions by applying the results of Withers and Nadarajah (2010a). Tilting was first used in statistics by Daniels (1954), who gave an approximation to the density of a sample mean. A conditional distribution obtained by tilting was first given by Skovgaard (1987), to $O(n^{-1})$, for the distribution of a sample mean conditional on correlated sample means. For some examples, see Barndorff-Nielsen and Cox (1989). For some other results on conditional distributions, see Pfanzagl (1979), Booth et al. (1992), DiCiccio et al. (1993), Hansen (1994), Moreira (2003), Chapter 4 of Butler (2007), and Klüppelberg and Seifert (2020). The results given here form the basis for constructing confidence intervals and confidence regions; see Withers (1989).
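As a minimal illustration of the quantile corrections underlying such confidence intervals, the classical one-term Cornish-Fisher adjustment $q(p) \approx z + (\gamma_1/6)(z^2 - 1)$ (a sketch of the standard first-order skewness correction only, not the extended expansions of §6) can be tested on a standardised exponential, whose exact quantile is available in closed form:

```python
# One-term Cornish-Fisher skewness correction to a normal quantile,
# tested on T = X - 1 with X ~ Exp(1): skewness gamma1 = 2, and the
# exact p-quantile of T is -log(1 - p) - 1.
import math

p = 0.95
z = 1.6448536269514722                  # standard normal 95% quantile
gamma1 = 2.0                            # skewness of Exp(1)
cf = z + (gamma1 / 6.0) * (z**2 - 1.0)  # corrected quantile
exact = -math.log(1.0 - p) - 1.0        # exact quantile of T

# The corrected quantile beats the plain normal quantile:
assert abs(cf - exact) < abs(z - exact)
```

Even in this extreme one-observation case the correction roughly halves the error of the plain normal quantile; higher-order terms of the Cornish-Fisher expansion reduce it further.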

Appendix A. The Coefficients P ¯ r 1-k Needed for (14)

Here we give the coefficients $\bar{P}_r^{1-k}$ needed for (14), for $r \leq 3$, using the symmetrising operator $S$. They are given for $r = 1$ by (15), and for $r = 2, 3$ by (16)–(17) and the following.
$\bar{P}_2^{1-4}$ needs $S\,\bar{k}_1^1 \bar{k}_2^{234}$, where $S\,a_1 b_{234} = (a_1 b_{234} + a_2 b_{341} + a_3 b_{412} + a_4 b_{123})/4$.
$\bar{P}_2^{1-6}$ needs $S\,\bar{k}_2^{1-3} \bar{k}_2^{4-6}$, where
$$S\,a_{1-3} a_{4-6} = (a_{123} a_{456} + a_{124} a_{356} + a_{125} a_{346} + a_{126} a_{345} + a_{134} a_{256} + a_{135} a_{246} + a_{136} a_{245} + a_{145} a_{236} + a_{146} a_{235} + a_{156} a_{234})/10. \tag{A1}$$
$\bar{P}_3^{1-3}$ needs $S\,\bar{k}_2^{12} \bar{k}_1^{3}$, where $S\,a_{12} b_3 = (a_{12} b_3 + a_{13} b_2 + a_{23} b_1)/3$.
$\bar{P}_3^{1-5}$ needs $S\,\bar{k}_3^{1-4} \bar{k}_1^{5}$, $S\,\bar{k}_2^{12} \bar{k}_2^{3-5}$, $S\,\bar{k}_2^{123} \bar{k}_1^{4} \bar{k}_1^{5}$, where
$S\,a_1 b_{2-5} = (a_1 b_{2-5} + \cdots + a_5 b_{1-4})/5$,
$S\,a_{12} b_{345} = (12.345 + 13.245 + 14.235 + 15.234 + 23.145 + 24.135 + 25.134 + 34.125 + 35.124 + 45.123)/10$ for $12.345 = a_{12} b_{345}$, and, writing $1.2. = 1.2.345 = a_1 a_2 b_{345}$,
$S\,a_1 a_2 b_{345} = (1.2. + 1.3. + 1.4. + 1.5. + 2.3. + 2.4. + 2.5. + 3.4. + 3.5. + 4.5.)/10$.
$\bar{P}_3^{1-7}$ needs $S\,\bar{k}_2^{123} \bar{k}_3^{4-7}$ and $S\,\bar{k}_2^{123} \bar{k}_2^{456} \bar{k}_1^{7}$, where, writing $123. = a_{123} b_{4-7}$,
$S\,a_{123} b_{4-7} = (123. + 124. + \cdots + 567.)/\binom{7}{3}$,
$S\,a_{123} a_{456} b_7 = S\,123.456.7 = (b_7\, S\,123.456 + \cdots + b_1\, S\,234.567)/7$, say, where $S\,123.456 = S\,a_{1-3} a_{4-6}$ of (A1).
$\bar{P}_3^{1-9} = S\,\bar{k}_2^{1-3} \bar{k}_2^{4-6} \bar{k}_2^{7-9}/6^3$, where $123.456.789 = a_{1-3} a_{4-6} a_{7-9}$, $123 = 123.\,S\,456.789$ with $S\,456.789$ of (A1), and
$S\,a_{1-3} a_{4-6} a_{7-9} = [123 + 124 + 125 + 126 + 127 + 128 + 129 + 134 + 135 + 136 + 137 + 138 + 139 + 145 + 146 + 147 + 148 + 149 + 156 + 157 + 158 + 159 + 167 + 168 + 169 + 178 + 179 + 189]/28$.
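The symmetrising operator $S$ can be made concrete in code. The sketch below (ours; the function name is hypothetical) implements $S\,a_{1-3} a_{4-6}$ of (A1): it averages $a_B a_{B^c}$ over the ten unordered splits of six indices into two triples, fixing the first index in one block so that each split is counted once.

```python
# Symmetriser S a_{1-3} a_{4-6} of Appendix A: average a_B * a_{B^c} over
# the 10 unordered splits of 6 indices into two triples.
from itertools import combinations

def sym_two_triples(a, idx):
    """a maps a frozenset of 3 indices to a number; idx is a 6-tuple."""
    i0, rest = idx[0], idx[1:]
    terms = []
    for pair in combinations(rest, 2):   # choose the triple containing idx[0]
        block = frozenset((i0,) + pair)
        comp = frozenset(idx) - block    # the complementary triple
        terms.append(a[block] * a[comp])
    assert len(terms) == 10              # C(5,2) = 10 splits, matching (A1)
    return sum(terms) / len(terms)

# Example: the symmetric "tensor" a_{ijk} = i + j + k on indices 1..6.
a = {frozenset(c): sum(c) for c in combinations(range(1, 7), 3)}
val = sym_two_triples(a, tuple(range(1, 7)))
print(val)   # 105.0
```

The same pattern extends to the other operators above: enumerate the distinct index partitions with `itertools.combinations`, form the product of factors for each partition, and divide by the number of partitions.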

References

  1. Anderson, T. W. (1958) An introduction to multivariate analysis. John Wiley, New York.
  2. Barndorff-Nielsen, O.E. and Cox, D.R. (1989). Asymptotic techniques for use in statistics. Chapman and Hall, London.
  3. Booth, J., Hall, P. and Wood, A. (1992) Bootstrap estimation of conditional distributions. Annals Statistics, 20 (3), 1594–1610. [CrossRef]
  4. Butler, R.W. (2007) Saddlepoint approximations with applications, pp. 107–144. Cambridge University Press. [CrossRef]
  5. Comtet, L. (1974) Advanced combinatorics. Reidel, Dordrecht, The Netherlands.
  6. Cornish, E.A. and Fisher, R. A. (1937) Moments and cumulants in the specification of distributions. Rev. de l’Inst. Int. de Statist. 5, 307–322. Reproduced in the collected papers of R.A. Fisher, 4.
  7. Daniels, H.E. (1954) Saddlepoint approximations in statistics. Ann. Math. Statist. 25, 631–650. [CrossRef]
  8. DiCiccio, T.J., Martin, M.A. and Young, G.A. (1993) Analytical approximations to conditional distribution functions. Biometrika, 80 (4), 781–790. [CrossRef]
  9. Fisher, R. A. and Cornish, E.A. (1960) The percentile points of distributions having known cumulants. Technometrics, 2, 209–225.
  10. Hall, P. (1988) Rejoinder: Theoretical comparison of bootstrap confidence intervals. Annals Statistics, 16 (3), 981–985.
  11. Hall, P. (1992) The bootstrap and Edgeworth expansion. Springer, New York.
  12. Hansen, B.E. (1994) Autoregressive conditional density estimation. International Economic Review, 35 (3), 705–730. [CrossRef]
  13. Klüppelberg, C. and Seifert, M.I. (2020) Explicit results on conditional distributions of generalized exponential mixtures. Journal Applied Prob., 57 (3), 760–774. [CrossRef]
  14. McCullagh, P. (1984) Tensor notation and cumulants of polynomials. Biometrika, 71 (3), 461–476. [CrossRef]
  15. McCullagh, P., (1987) Tensor methods in statistics. Chapman and Hall, London.
  16. Moreira, M.J. (2003) A conditional likelihood ratio test for structural models. Econometrica, 71 (4), 1027–1048.
  17. Pfanzagl, P. (1979). Conditional distributions as derivatives. Annals Probability, 7 (6), 1046–1050.
  18. Stuart, A. and Ord, K. (1991). Kendall’s advanced theory of statistics, 2. 5th edition. Griffin, London.
  19. Skovgaard, I.M. (1981a) Edgeworth expansions of the distributions of maximum likelihood estimators in the general (non i.i.d.) case. Scand. J. Statist., 8, 227-236.
  20. Skovgaard, I. M. (1981b) Transformation of an Edgeworth expansion by a sequence of smooth functions. Scand. J. Statist., 8, 207-217.
  21. Skovgaard, I.M. (1987) Saddlepoint expansions for conditional distributions. Journal of Applied Prob., 24 (4), 875–887. [CrossRef]
  22. Withers, C.S. (1983) Accurate confidence intervals for distributions with one parameter. Ann. Instit. Statist. Math. A, 35, 49–61. [CrossRef]
  23. Withers, C.S. (1984) Asymptotic expansions for distributions and quantiles with power series cumulants. Journal Royal Statist. Soc. B, 46, 389–396. Corrigendum (1986) 48, 256. For typos, see p23–24 of Withers (2024).
  24. Withers, C.S. (1989) Accurate confidence intervals when nuisance parameters are present. Comm. Statist. - Theory and Methods, 18, 4229–4259. [CrossRef]
  25. Withers, C.S. (2000) A simple expression for the multivariate Hermite polynomials. Statistics and Prob. Letters, 47, 165–169. [CrossRef]
  26. Withers, C.S. (2024) 5th-order multivariate Edgeworth expansions for parametric estimates. Mathematics, 12 (6), 905 (Special Issue: Advances in Applied Prob. and Statist. Inference). https://www.mdpi.com/2227-7390/12/6/905/pdf.
  27. Withers, C.S. and Nadarajah, S. (2010a) Tilted Edgeworth expansions for asymptotically normal vectors. Annals of the Institute of Statistical Mathematics, 62 (6), 1113–1142. For typos, see p25 of Withers (2024). [CrossRef]
  28. Withers, C.S. and Nadarajah, S. (2010b) The bias and skewness of M-estimators in regression. Electronic Journal of Statistics, 4, 1–14. http://projecteuclid.org/DPubS/Repository/1.0 /Disseminate?view=bodyid=pdfview1handle=euclid.ejs/1262876992 For typos, see p25 of Withers (2024).
  29. Withers, C.S. and Nadarajah, S. (2011) Generalized Cornish-Fisher expansions. Bull. Brazilian Math. Soc., New Series, 42 (2), 213–242. [CrossRef]
  30. Withers, C.S. and Nadarajah, S. (2014) Expansions about the gamma for the distribution and quantiles of a standard estimate. Methodology and Computing in Applied Prob., 16 (3), 693-713. For typos, see p25–26 of Withers (2024). [CrossRef]
  31. Withers, C.S. and Nadarajah, S. (2016) Expansions for log densities of multivariate estimates. Methodology and Computing in Applied Probability, 18, 911–920. [CrossRef]
  32. Withers, C.S. and Nadarajah, S. (2020) The distribution and percentiles of channel capacity for multiple arrays. Sadhana, Indian Academy of Sciences, 45 (1), 1–25. [CrossRef]