A Family of Estimators of Population Variance Using Information On Auxiliary Attribute
Ashish K. Singh
College of Management Studies,
Raj Kumar Goel Institute of Technology
Florentin Smarandache
Department of Mathematics, University of New Mexico, Gallup, USA
Published in:
Rajesh Singh, F. Smarandache (Editors)
STUDIES IN SAMPLING TECHNIQUES AND TIME SERIES ANALYSIS
Zip Publishing, Columbus, USA, 2011
ISBN 978-1-59973-159-9
pp. 63 - 70
Abstract
This chapter proposes some estimators for the population variance of the variable under study, which make use of information regarding the population proportion possessing a certain attribute. Under the simple random sampling without replacement (SRSWOR) scheme, the mean squared error (MSE) of each estimator is derived up to the first order of approximation. The results have been illustrated numerically by taking an empirical population considered in the literature.
1. Introduction
It is well known that auxiliary information in the theory of sampling is used to increase the efficiency of estimators of population parameters. The ratio, regression and product methods of estimation are good examples in this context. There exist situations when information is available in the form of an attribute $\phi$ which is highly correlated with the study variable $y$. Taking into consideration the point biserial correlation coefficient between the auxiliary attribute and the study variable, several authors, including Naik and Gupta (1996), Jhajj et al. (2006), Shabbir and Gupta (2007), Singh et al. (2007, 2008) and Abd-Elfattah et al. (2010), have defined ratio estimators of the population mean for the case when prior information on the population proportion of units possessing the attribute is available.

For the problem of estimating the population variance $S_y^2$ when prior information on auxiliary variable(s) is available, Das and Tripathi (1978), Isaki (1983), Prasad and Singh (1990), Kadilar and Cingi (2006, 2007) and Singh et al. (2007) have suggested various estimators of $S_y^2$.
In this chapter we propose a family of estimators for the population variance $S_y^2$ when one of the variables is in the form of an attribute. For the main results we confine ourselves to the SRSWOR scheme, ignoring the finite population correction term.

2. Notation and existing estimators

Let $s_y^2$ and $s_\phi^2$ denote the sample variances of the study variable $y$ and the auxiliary attribute $\phi$, with population counterparts $S_y^2$ and $S_\phi^2$. The ratio, difference and exponential ratio-type estimators of $S_y^2$ are

$$t_1 = s_y^2\,\frac{S_\phi^2}{s_\phi^2}, \qquad (2.1)$$

$$t_2 = s_y^2 + b\left(S_\phi^2 - s_\phi^2\right), \qquad (2.2)$$

$$t_3 = s_y^2\,\exp\!\left[\frac{S_\phi^2 - s_\phi^2}{S_\phi^2 + s_\phi^2}\right], \qquad (2.3)$$

respectively, where $b$ is a constant chosen so as to minimize the MSE of the estimator $t_2$.
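As a quick illustration of (2.1)-(2.3), the sketch below (a minimal Python example; the simulated population and all numbers are hypothetical, not from the chapter) computes the three point estimates from an SRSWOR sample, plugging a sample analogue of the optimum $b$ derived in (2.10) below into $t_2$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: y is the study variable, phi a 0/1 attribute.
N = 500
phi = rng.binomial(1, 0.3, size=N)
y = 5.0 + 4.0 * phi + rng.normal(0.0, 1.0, size=N)
S2_phi = phi.var(ddof=1)  # population variance of the attribute (assumed known)

# SRSWOR sample of size n.
n = 40
idx = rng.choice(N, size=n, replace=False)
ys, ps = y[idx], phi[idx]
s2_y, s2_phi = ys.var(ddof=1), ps.var(ddof=1)

def d_hat(u, v, p, q):
    """Sample analogue of the moment ratio delta_pq defined in Section 2."""
    du, dv = u - u.mean(), v - v.mean()
    m = lambda a, b: np.mean(du**a * dv**b)
    return m(p, q) / (m(2, 0) ** (p / 2) * m(0, 2) ** (q / 2))

# Sample analogue of the optimum b of (2.10).
b = s2_y * (d_hat(ys, ps, 2, 2) - 1) / (s2_phi * (d_hat(ys, ps, 0, 4) - 1))

t1 = s2_y * S2_phi / s2_phi                                # ratio, (2.1)
t2 = s2_y + b * (S2_phi - s2_phi)                          # difference, (2.2)
t3 = s2_y * np.exp((S2_phi - s2_phi) / (S2_phi + s2_phi))  # exponential, (2.3)
print(f"S_y^2 = {y.var(ddof=1):.3f}  t1 = {t1:.3f}  t2 = {t2:.3f}  t3 = {t3:.3f}")
```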
To obtain the MSEs, write $s_y^2 = S_y^2(1+e_0)$ and $s_\phi^2 = S_\phi^2(1+e_1)$, so that $E(e_0) = E(e_1) = 0$ and

$$E\!\left(e_0^2\right) = \frac{(\delta_{40}-1)}{n}, \qquad E\!\left(e_1^2\right) = \frac{(\delta_{04}-1)}{n}, \qquad E(e_0e_1) = \frac{(\delta_{22}-1)}{n},$$
where

$$\delta_{pq} = \frac{\mu_{pq}}{\mu_{20}^{p/2}\,\mu_{02}^{q/2}}, \qquad \mu_{pq} = \frac{\displaystyle\sum_{i=1}^{N}\left(y_i-\bar{Y}\right)^p\left(\phi_i-P\right)^q}{(N-1)},$$

and $P$ is the population proportion of units possessing the attribute. In particular,

$$\beta_{2(y)} = \frac{\mu_{40}}{\mu_{20}^2} = \delta_{40} \qquad \text{and} \qquad \beta_{2(\phi)} = \frac{\mu_{04}}{\mu_{02}^2} = \delta_{04}.$$
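The moment notation is compact but easy to misread; a minimal sketch, assuming the full population $(y_i, \phi_i)$ is available as arrays (the data here are hypothetical), makes the definitions concrete:

```python
import numpy as np

def mu(y, phi, p, q):
    """Central population moment mu_pq, with the (N - 1) divisor used above."""
    return np.sum((y - y.mean()) ** p * (phi - phi.mean()) ** q) / (len(y) - 1)

def delta(y, phi, p, q):
    """Moment ratio delta_pq = mu_pq / (mu_20^(p/2) * mu_02^(q/2))."""
    return mu(y, phi, p, q) / (
        mu(y, phi, 2, 0) ** (p / 2) * mu(y, phi, 0, 2) ** (q / 2)
    )

# Hypothetical 0/1 attribute, so phi.mean() equals the proportion P.
rng = np.random.default_rng(0)
phi = rng.binomial(1, 0.4, size=200)
y = 2.0 + 3.0 * phi + rng.normal(0.0, 1.0, size=200)

print(delta(y, phi, 4, 0))  # beta_2(y)   = mu_40 / mu_20^2
print(delta(y, phi, 0, 4))  # beta_2(phi) = mu_04 / mu_02^2
print(delta(y, phi, 2, 2))  # delta_22
```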
Expressing $t_1$ in terms of e's and retaining terms up to the second degree, we have

$$\left(t_1 - S_y^2\right) = S_y^2\left(e_0 - e_1 + e_1^2 - e_0e_1\right). \qquad (2.4)$$

Squaring (2.4), taking expectation, and proceeding in the same way for $t_3$, the MSEs of $t_1$ and $t_3$ up to the first order of approximation are

$$MSE(t_1) = \frac{S_y^4}{n}\left[\beta_{2(y)}^* + \beta_{2(\phi)}^* - 2\delta_{22}^*\right] \qquad (2.7)$$

and

$$MSE(t_3) = \frac{S_y^4}{n}\left[\beta_{2(y)}^* + \frac{\beta_{2(\phi)}^*}{4} - \delta_{22}^*\right], \qquad (2.8)$$

where $\beta_{2(y)}^* = \beta_{2(y)} - 1$, $\beta_{2(\phi)}^* = \beta_{2(\phi)} - 1$ and $\delta_{22}^* = \delta_{22} - 1$.
The variance of the difference estimator $t_2$ is

$$V(t_2) = \frac{1}{n}\left[S_y^4\left(\delta_{40}-1\right) + b^2 S_\phi^4\left(\delta_{04}-1\right) - 2b\,S_y^2S_\phi^2\left(\delta_{22}-1\right)\right], \qquad (2.9)$$

which is minimized for

$$b = \frac{S_y^2\left(\delta_{22}-1\right)}{S_\phi^2\left(\delta_{04}-1\right)}. \qquad (2.10)$$

Substituting the optimum value of $b$ in (2.9), we get the minimum variance of the estimator $t_2$ as

$$\min V(t_2) = \frac{S_y^4}{n}\,\beta_{2(y)}^*\left[1 - \frac{\delta_{22}^{*2}}{\beta_{2(y)}^*\,\beta_{2(\phi)}^*}\right] = V\!\left(s_y^2\right)\left(1 - \rho^2_{\left(s_y^2,\,s_\phi^2\right)}\right), \qquad (2.11)$$

where $V\!\left(s_y^2\right) = S_y^4\left(\delta_{40}-1\right)/n$ is the variance of the usual unbiased estimator $s_y^2$.
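The first-order expressions (2.7), (2.8) and (2.11) depend only on $n$, $S_y^4$ and the moment ratios, so they are convenient to evaluate side by side; the helper below (the function name is mine, not the chapter's) does exactly that:

```python
def first_order_mses(n, S2y, d40, d04, d22):
    """MSE of t1 (2.7), MSE of t3 (2.8), min V(t2) (2.11) and V(s_y^2)."""
    by, bp, d = d40 - 1.0, d04 - 1.0, d22 - 1.0  # starred quantities
    c = S2y ** 2 / n                             # S_y^4 / n
    return {
        "V(s_y^2)":  c * by,
        "MSE(t1)":   c * (by + bp - 2.0 * d),    # (2.7)
        "MSE(t3)":   c * (by + bp / 4.0 - d),    # (2.8)
        "min V(t2)": c * (by - d * d / bp),      # (2.11)
    }

# With the population constants of Section 5 this reproduces the relative
# efficiencies quoted there.
print(first_order_mses(23, 4.074, 3.811, 6.162, 3.996))
```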
3. Adapted estimator
We adapt the Shabbir and Gupta (2007) and Grover (2010) estimators to the case when one of the variables is in the form of an attribute, and propose the estimator

$$t_4 = \left[k_1 s_y^2 + k_2\left(S_\phi^2 - s_\phi^2\right)\right]\exp\!\left(\frac{S_\phi^2 - s_\phi^2}{S_\phi^2 + s_\phi^2}\right), \qquad (3.1)$$

where $k_1$ and $k_2$ are constants to be determined.
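For given constants, computing the point estimate (3.1) is a one-liner; a minimal sketch (the function name is mine), with the optimum choices of $k_1$ and $k_2$ derived below:

```python
import numpy as np

def t4(s2_y, s2_phi, S2_phi, k1, k2):
    """Proposed estimator (3.1) for given constants k1 and k2."""
    ratio = (S2_phi - s2_phi) / (S2_phi + s2_phi)
    return (k1 * s2_y + k2 * (S2_phi - s2_phi)) * np.exp(ratio)
```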
Expressing (3.1) in terms of e's and retaining only terms up to the second degree of e's, we have

$$t_4 = \left[k_1 S_y^2(1+e_0) - k_2 S_\phi^2 e_1\right]\left[1 - \frac{e_1}{2} + \frac{3}{8}e_1^2\right]. \qquad (3.2)$$

Up to the first order of approximation, the mean squared error of $t_4$ is

$$\begin{aligned}
MSE(t_4) = E\left(t_4 - S_y^2\right)^2
= S_y^4&\left[(k_1-1)^2 + \lambda k_1^2\left(\beta_{2(y)}^* + \beta_{2(\phi)}^* - 2\delta_{22}^*\right) + \lambda k_1\left(\delta_{22}^* - \frac{3}{4}\beta_{2(\phi)}^*\right)\right] \\
&+ S_\phi^4 k_2^2\,\lambda\,\beta_{2(\phi)}^* + 2\lambda\,S_y^2S_\phi^2\left[k_1k_2\left(\beta_{2(\phi)}^* - \delta_{22}^*\right) - \frac{k_2}{2}\,\beta_{2(\phi)}^*\right], \qquad (3.3)
\end{aligned}$$

where $\lambda = \frac{1}{n}$.
On partially differentiating (3.3) with respect to $k_i\ (i = 1, 2)$ and equating to zero, we get the optimum values of $k_1$ and $k_2$, respectively, as

$$k_1^* = \frac{\beta_{2(\phi)}^*\left(2 - \dfrac{\lambda}{4}\beta_{2(\phi)}^*\right)}{2\left[\beta_{2(\phi)}^*(\lambda A + 1) - \lambda B^2\right]} \qquad (3.4)$$

and

$$k_2^* = \frac{S_y^2\left[\beta_{2(\phi)}^*(\lambda A + 1) - \lambda B^2 - B\left(2 - \dfrac{\lambda}{4}\beta_{2(\phi)}^*\right)\right]}{2S_\phi^2\left[\beta_{2(\phi)}^*(\lambda A + 1) - \lambda B^2\right]}, \qquad (3.5)$$

where $A = \beta_{2(y)}^* + \beta_{2(\phi)}^* - 2\delta_{22}^*$ and $B = \beta_{2(\phi)}^* - \delta_{22}^*$.
On substituting these optimum values of $k_1$ and $k_2$ in (3.3), we get the minimum value of the MSE of $t_4$ as

$$\min MSE(t_4) = \frac{MSE(t_2)}{1+\dfrac{MSE(t_2)}{S_y^4}} - \frac{\lambda\,\beta_{2(\phi)}^*\left(MSE(t_2) + \dfrac{\lambda S_y^4\,\beta_{2(\phi)}^*}{16}\right)}{4\left(1+\dfrac{MSE(t_2)}{S_y^4}\right)}, \qquad (3.6)$$

where $MSE(t_2)$ denotes the minimum variance of $t_2$ given in (2.11).
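As a check on the algebra, the sketch below (the function name is mine) evaluates $k_1^*$ and $k_2^*$ from (3.4)-(3.5), substitutes them directly into (3.3), and compares the result with the closed form (3.6); with the Section 5 constants both routes give approximately 0.685:

```python
def min_mse_t4(n, S2y, S2phi, d40, d04, d22):
    """Minimum MSE of t4: substitution into (3.3) vs the closed form (3.6)."""
    lam = 1.0 / n
    by, bp, d = d40 - 1.0, d04 - 1.0, d22 - 1.0
    A, B = by + bp - 2.0 * d, bp - d
    D = bp * (lam * A + 1.0) - lam * B ** 2

    k1 = bp * (2.0 - lam * bp / 4.0) / (2.0 * D)                     # (3.4)
    k2 = S2y * (D - B * (2.0 - lam * bp / 4.0)) / (2.0 * S2phi * D)  # (3.5)

    # Direct substitution of (k1, k2) into (3.3); note S_y^4 = S2y**2.
    direct = (S2y**2 * ((k1 - 1.0)**2 + lam * k1**2 * A
                        + lam * k1 * (d - 0.75 * bp))
              + S2phi**2 * k2**2 * lam * bp
              + 2.0 * lam * S2y * S2phi * (k1 * k2 * B - 0.5 * k2 * bp))

    # Closed form (3.6), with MSE(t2) the minimum variance from (2.11).
    m2 = lam * S2y**2 * (by - d * d / bp)
    den = 1.0 + m2 / S2y**2
    closed = m2 / den - lam * bp * (m2 + lam * S2y**2 * bp / 16.0) / (4.0 * den)
    return direct, closed

print(min_mse_t4(23, 4.074, 0.110, 3.811, 6.162, 3.996))  # ~ (0.685, 0.685)
```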
4. Efficiency Comparison
First we compare the proposed estimator, under the optimum condition, with the usual unbiased estimator $s_y^2$:

$$V\!\left(s_y^2\right) - \min MSE(t_4) = \frac{\lambda S_y^4\,\delta_{22}^{*2}}{\beta_{2(\phi)}^*} + \frac{\left(MSE(t_2)\right)^2/S_y^4}{1+\dfrac{MSE(t_2)}{S_y^4}} + \frac{\lambda\,\beta_{2(\phi)}^*\left(MSE(t_2)+\dfrac{\lambda S_y^4\,\beta_{2(\phi)}^*}{16}\right)}{4\left(1+\dfrac{MSE(t_2)}{S_y^4}\right)} \geq 0 \text{ always.} \qquad (4.1)$$
Next we compare it with the ratio estimator $t_1$:

$$MSE(t_1) - \min MSE(t_4) = \lambda S_y^4\left[\sqrt{\beta_{2(\phi)}^*} - \frac{\delta_{22}^*}{\sqrt{\beta_{2(\phi)}^*}}\right]^2 + \frac{\left(MSE(t_2)\right)^2/S_y^4}{1+\dfrac{MSE(t_2)}{S_y^4}} + \frac{\lambda\,\beta_{2(\phi)}^*\left(MSE(t_2)+\dfrac{\lambda S_y^4\,\beta_{2(\phi)}^*}{16}\right)}{4\left(1+\dfrac{MSE(t_2)}{S_y^4}\right)} \geq 0 \text{ always.} \qquad (4.2)$$
Next we compare it with the exponential ratio-type estimator $t_3$:

$$MSE(t_3) - \min MSE(t_4) = \lambda S_y^4\left[\frac{\sqrt{\beta_{2(\phi)}^*}}{2} - \frac{\delta_{22}^*}{\sqrt{\beta_{2(\phi)}^*}}\right]^2 + \frac{\left(MSE(t_2)\right)^2/S_y^4}{1+\dfrac{MSE(t_2)}{S_y^4}} + \frac{\lambda\,\beta_{2(\phi)}^*\left(MSE(t_2)+\dfrac{\lambda S_y^4\,\beta_{2(\phi)}^*}{16}\right)}{4\left(1+\dfrac{MSE(t_2)}{S_y^4}\right)} \geq 0 \text{ always.} \qquad (4.3)$$
Finally we compare it with the difference (regression-type) estimator $t_2$ at its optimum:

$$MSE(t_2) - \min MSE(t_4) = \frac{\left(MSE(t_2)\right)^2/S_y^4}{1+\dfrac{MSE(t_2)}{S_y^4}} + \frac{\lambda\,\beta_{2(\phi)}^*\left(MSE(t_2)+\dfrac{\lambda S_y^4\,\beta_{2(\phi)}^*}{16}\right)}{4\left(1+\dfrac{MSE(t_2)}{S_y^4}\right)} > 0 \text{ always.} \qquad (4.4)$$

Hence, to the first order of approximation, the proposed estimator $t_4$ is always at least as efficient as $s_y^2$, $t_1$, $t_2$ and $t_3$.
5. Empirical study
We have used the data given in Sukhatme and Sukhatme (1970, p. 256), where
y = number of villages in the circle, and
φ = 1 if the circle consists of more than five villages, and 0 otherwise.
n     N     S_y^2    S_phi^2    delta_40    delta_04    delta_22
23    89    4.074    0.110      3.811       6.162       3.996

The percent relative efficiencies (PREs) of the estimators with respect to the usual unbiased estimator $t_0 = s_y^2$ are:

Estimator    t_0    t_1        t_2        t_3        t_4
PRE          100    141.898    262.187    254.274    296.016
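As a check on the table, the following sketch recomputes the PREs from the constants above using the first-order formulas (2.7), (2.8), (2.11) and (3.6); it reproduces the quoted values up to rounding in the last digit:

```python
n, S2y, S2phi = 23, 4.074, 0.110
d40, d04, d22 = 3.811, 6.162, 3.996

lam = 1.0 / n
by, bp, d = d40 - 1.0, d04 - 1.0, d22 - 1.0
c = lam * S2y**2                      # lambda * S_y^4

v0 = c * by                           # V(t0), t0 = s_y^2
m1 = c * (by + bp - 2.0 * d)          # MSE(t1), (2.7)
m2 = c * (by - d * d / bp)            # min V(t2), (2.11)
m3 = c * (by + bp / 4.0 - d)          # MSE(t3), (2.8)
den = 1.0 + m2 / S2y**2
m4 = m2 / den - lam * bp * (m2 + c * bp / 16.0) / (4.0 * den)  # (3.6)

for name, m in [("t0", v0), ("t1", m1), ("t2", m2), ("t3", m3), ("t4", m4)]:
    print(f"PRE({name}) = {100.0 * v0 / m:.3f}")
# PRE(t1) = 141.898, PRE(t2) = 262.188, PRE(t3) = 254.274, PRE(t4) = 296.017
```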
Conclusion

Both the theoretical comparisons of Section 4 and the empirical results above show that the proposed estimator $t_4$ is more efficient than the usual unbiased estimator $t_0 = s_y^2$, the ratio estimator $t_1$, the difference estimator $t_2$ and the exponential ratio-type estimator $t_3$. Its use is therefore recommended when information on an auxiliary attribute correlated with the study variable is available.
References