
Rajesh Singh, Mukesh Kumar

Department of Statistics, B.H.U., Varanasi (U.P.), India

Ashish K. Singh
College of Management Studies,
Raj Kumar Goel Institute of Technology

Florentin Smarandache
Department of Mathematics, University of New Mexico, Gallup, USA

A Family of Estimators of Population Variance Using Information on Auxiliary Attribute

Published in:
Rajesh Singh, F. Smarandache (Editors)
STUDIES IN SAMPLING TECHNIQUES AND TIME SERIES ANALYSIS
Zip Publishing, Columbus, USA, 2011
ISBN 978-1-59973-159-9
pp. 63 - 70
Abstract
This chapter proposes some estimators for the population variance of the variable under study that make use of information on the population proportion possessing a certain attribute. Under the simple random sampling without replacement (SRSWOR) scheme, the mean squared error (MSE) of each estimator is derived up to the first order of approximation. The results are illustrated numerically using an empirical population taken from the literature.

Keywords: Auxiliary attribute, exponential ratio-type estimators, simple random sampling, mean square error, efficiency.

1. Introduction

It is well known that auxiliary information is used in sampling theory to increase the efficiency of estimators of population parameters. The ratio, regression and product methods of estimation are good examples in this context. There are situations in which the available information takes the form of an attribute that is highly correlated with the study variable y. Taking into consideration the point biserial correlation coefficient between the auxiliary attribute and the study variable, several authors, including Naik and Gupta (1996), Jhajj et al. (2006), Shabbir and Gupta (2007), Singh et al. (2007, 2008) and Abd-Elfattah et al. (2010), have defined ratio estimators of the population mean when prior information on the population proportion of units possessing the attribute is available.

In many situations the problem of estimating the population variance $S_y^2$ of the study variable y assumes importance. When prior information on parameters of the auxiliary variable(s) is available, Das and Tripathi (1978), Isaki (1983), Prasad and Singh (1990), Kadilar and Cingi (2006, 2007) and Singh et al. (2007) have suggested various estimators of $S_y^2$.

In this chapter we propose a family of estimators for the population variance $S_y^2$ when one of the variables is in the form of an attribute. For the main results we confine ourselves to the SRSWOR sampling scheme, ignoring the finite population correction.

2. The proposed estimators and their properties

Following Isaki (1983), we propose the ratio estimator

$$t_1 = s_y^2\,\frac{S_\phi^2}{s_\phi^2} \qquad (2.1)$$

Next we propose a regression estimator for the population variance,

$$t_2 = s_y^2 + b\left(S_\phi^2 - s_\phi^2\right) \qquad (2.2)$$

Following Singh et al. (2009), we propose another estimator,

$$t_3 = s_y^2 \exp\left[\frac{S_\phi^2 - s_\phi^2}{S_\phi^2 + s_\phi^2}\right] \qquad (2.3)$$

where $s_y^2$ and $s_\phi^2$ are unbiased estimators of the population variances $S_y^2$ and $S_\phi^2$ respectively, and b is a constant chosen so as to minimize the MSE of the estimator.
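As a computational illustration, the following is a minimal sketch (Python with NumPy is assumed here, and the toy numbers are purely hypothetical) of how the three estimators could be evaluated from a sample when the population variance $S_\phi^2$ of the attribute is known; the constant b is left as an input, since its optimum value is only derived in (2.10) below.

```python
import numpy as np

def variance_estimators(y, phi, S2_phi, b):
    """Point estimates t1, t2, t3 of S_y^2 from an SRSWOR sample.

    y      : sample values of the study variable
    phi    : sample values of the auxiliary attribute (0/1)
    S2_phi : known population variance of the attribute
    b      : constant of the regression-type estimator t2 (see (2.10))
    """
    y, phi = np.asarray(y, float), np.asarray(phi, float)
    s2_y = np.var(y, ddof=1)      # sample variance of y (divisor n - 1)
    s2_phi = np.var(phi, ddof=1)  # sample variance of phi

    t1 = s2_y * S2_phi / s2_phi                                # ratio estimator (2.1)
    t2 = s2_y + b * (S2_phi - s2_phi)                          # regression estimator (2.2)
    t3 = s2_y * np.exp((S2_phi - s2_phi) / (S2_phi + s2_phi))  # exponential ratio estimator (2.3)
    return t1, t2, t3

# Hypothetical usage with made-up numbers, for illustration only.
y = [3, 7, 5, 9, 2, 6, 4, 8, 1, 6, 5, 7]
phi = [1 if v > 5 else 0 for v in y]
print(variance_estimators(y, phi, S2_phi=0.25, b=1.0))
```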

To obtain the bias and MSE, we write

$$s_y^2 = S_y^2(1 + e_0), \qquad s_\phi^2 = S_\phi^2(1 + e_1),$$

such that $E(e_0) = E(e_1) = 0$, and

$$E(e_0^2) = \frac{(\delta_{40} - 1)}{n}, \quad E(e_1^2) = \frac{(\delta_{04} - 1)}{n}, \quad E(e_0 e_1) = \frac{(\delta_{22} - 1)}{n},$$

where

$$\delta_{pq} = \frac{\mu_{pq}}{\mu_{20}^{p/2}\,\mu_{02}^{q/2}}, \qquad \mu_{pq} = \frac{\sum_{i=1}^{N}\left(y_i - \bar{Y}\right)^p\left(\phi_i - P\right)^q}{N - 1},$$

$$\beta_{2(y)} = \frac{\mu_{40}}{\mu_{20}^2} = \delta_{40} \quad \text{and} \quad \beta_{2(\phi)} = \frac{\mu_{04}}{\mu_{02}^2} = \delta_{04}.$$

Let $\beta^*_{2(y)} = \beta_{2(y)} - 1$, $\beta^*_{2(\phi)} = \beta_{2(\phi)} - 1$ and $\delta^*_{pq} = \delta_{pq} - 1$, and let P denote the proportion of units in the population possessing the attribute $\phi$.
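For concreteness, a minimal sketch (Python/NumPy assumed) of how the moment ratios $\delta_{pq}$ could be computed from the N population values, with the divisor N - 1 as in the definition of $\mu_{pq}$ above:

```python
import numpy as np

def delta_pq(y, phi, p, q):
    """delta_pq = mu_pq / (mu_20^{p/2} * mu_02^{q/2}) for population values
    y_1..y_N and phi_1..phi_N, where mu_pq uses the divisor N - 1."""
    y, phi = np.asarray(y, float), np.asarray(phi, float)
    N = y.size
    dy, dphi = y - y.mean(), phi - phi.mean()   # phi.mean() equals the proportion P

    def mu(a, b):
        return np.sum(dy ** a * dphi ** b) / (N - 1)

    return mu(p, q) / (mu(2, 0) ** (p / 2) * mu(0, 2) ** (q / 2))

# beta_2(y) = delta_pq(y, phi, 4, 0), beta_2(phi) = delta_pq(y, phi, 0, 4),
# and the cross moment ratio delta_22 = delta_pq(y, phi, 2, 2).
```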

Now the estimator $t_1$ defined in (2.1) can be written as

$$\left(t_1 - S_y^2\right) = S_y^2\left(e_0 - e_1 + e_1^2 - e_0 e_1\right) \qquad (2.4)$$

Similarly, the estimator $t_2$ can be written as

$$\left(t_2 - S_y^2\right) = S_y^2 e_0 - b S_\phi^2 e_1 \qquad (2.5)$$

and the estimator $t_3$ can be written as

$$\left(t_3 - S_y^2\right) = S_y^2\left(e_0 - \frac{e_1}{2} - \frac{e_0 e_1}{2} + \frac{3 e_1^2}{8}\right) \qquad (2.6)$$

The MSEs of $t_1$ and $t_3$ are given, respectively, by

$$\mathrm{MSE}(t_1) = \frac{S_y^4}{n}\left[\beta^*_{2(y)} + \beta^*_{2(\phi)} - 2\delta^*_{22}\right] \qquad (2.7)$$

$$\mathrm{MSE}(t_3) = \frac{S_y^4}{n}\left[\beta^*_{2(y)} + \frac{\beta^*_{2(\phi)}}{4} - \delta^*_{22}\right] \qquad (2.8)$$

The variance of $t_2$ is given by

$$V(t_2) = \frac{1}{n}\left[S_y^4\,\delta^*_{40} + b^2 S_\phi^4\,\delta^*_{04} - 2 b S_y^2 S_\phi^2\,\delta^*_{22}\right] \qquad (2.9)$$

On differentiating (2.9) with respect to b and equating the derivative to zero, we obtain

$$b = \frac{S_y^2\,\delta^*_{22}}{S_\phi^2\,\delta^*_{04}} \qquad (2.10)$$

Substituting this optimum value of b in (2.9), we get the minimum variance of the estimator $t_2$ as

$$\min V(t_2) = \frac{S_y^4}{n}\,\beta^*_{2(y)}\left[1 - \frac{\delta^{*2}_{22}}{\beta^*_{2(y)}\,\beta^*_{2(\phi)}}\right] = V\!\left(s_y^2\right)\left(1 - \rho^2_{(s_y^2,\,s_\phi^2)}\right) \qquad (2.11)$$

where $\rho_{(s_y^2,\,s_\phi^2)}$ denotes the correlation coefficient between $s_y^2$ and $s_\phi^2$ to the first order of approximation.
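The step from (2.9) to (2.11) can be spelled out as a short worked calculation: substituting (2.10) into (2.9) gives

$$V(t_2)\Big|_{b\,=\,b_{\mathrm{opt}}} = \frac{S_y^4}{n}\left[\delta^*_{40} + \frac{\delta^{*2}_{22}}{\delta^*_{04}} - \frac{2\,\delta^{*2}_{22}}{\delta^*_{04}}\right] = \frac{S_y^4}{n}\left[\beta^*_{2(y)} - \frac{\delta^{*2}_{22}}{\beta^*_{2(\phi)}}\right],$$

which is the right-hand side of (2.11), since $\beta^*_{2(y)} = \delta^*_{40}$ and $\beta^*_{2(\phi)} = \delta^*_{04}$.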

3. Adapted estimator

We adapt the Shabbir and Gupta (2007) and Grover (2010) estimator to the case when one of the variables is in the form of an attribute, and propose the estimator

$$t_4 = \left[k_1 s_y^2 + k_2\left(S_\phi^2 - s_\phi^2\right)\right]\exp\left(\frac{S_\phi^2 - s_\phi^2}{S_\phi^2 + s_\phi^2}\right) \qquad (3.1)$$

where $k_1$ and $k_2$ are suitably chosen constants.

Expressing (3.1) in terms of the e's and retaining terms only up to the second degree of the e's, we have

$$t_4 = \left[k_1 S_y^2(1 + e_0) - k_2 S_\phi^2 e_1\right]\left[1 - \frac{e_1}{2} + \frac{3 e_1^2}{8}\right] \qquad (3.2)$$

Up to the first order of approximation, the mean square error of $t_4$ is

$$\mathrm{MSE}(t_4) = E\left(t_4 - S_y^2\right)^2$$
$$= S_y^4\left[(k_1 - 1)^2 + \lambda k_1^2\left(\beta^*_{2(y)} + \beta^*_{2(\phi)} - 2\delta^*_{22}\right) + \lambda k_1\left(\delta^*_{22} - \frac{3}{4}\beta^*_{2(\phi)}\right)\right] + \lambda k_2^2 S_\phi^4\,\beta^*_{2(\phi)} + 2\lambda S_y^2 S_\phi^2\left[k_1 k_2\left(\beta^*_{2(\phi)} - \delta^*_{22}\right) - \frac{k_2}{2}\beta^*_{2(\phi)}\right] \qquad (3.3)$$

where $\lambda = \frac{1}{n}$.

On partially differentiating (3.3) with respect to $k_i$ (i = 1, 2) and equating the derivatives to zero, we get the optimum values of $k_1$ and $k_2$, respectively, as

$$k_1^* = \frac{\beta^*_{2(\phi)}\left(2 - \frac{\lambda}{4}\beta^*_{2(\phi)}\right)}{2\left[\beta^*_{2(\phi)}(\lambda A + 1) - \lambda B^2\right]} \qquad (3.4)$$

and

$$k_2^* = \frac{S_y^2\left[\beta^*_{2(\phi)}(\lambda A + 1) - \lambda B^2 - B\left(2 - \frac{\lambda}{4}\beta^*_{2(\phi)}\right)\right]}{2 S_\phi^2\left[\beta^*_{2(\phi)}(\lambda A + 1) - \lambda B^2\right]} \qquad (3.5)$$

where $A = \beta^*_{2(y)} + \beta^*_{2(\phi)} - 2\delta^*_{22}$ and $B = \beta^*_{2(\phi)} - \delta^*_{22}$.

On substituting these optimum values of $k_1$ and $k_2$ in (3.3), we get the minimum value of the MSE of $t_4$ as

$$\min\mathrm{MSE}(t_4) = \frac{\mathrm{MSE}(t_2)}{1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}} - \frac{\lambda\,\beta^*_{2(\phi)}\left(\mathrm{MSE}(t_2) + \dfrac{\lambda S_y^4\,\beta^*_{2(\phi)}}{16}\right)}{4\left(1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}\right)} \qquad (3.6)$$

where $\mathrm{MSE}(t_2)$ denotes the minimum variance of $t_2$ given in (2.11).
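As a numerical sanity check, the sketch below (plain Python assumed, with the moment values of the empirical study in Section 5) evaluates the optimum constants (3.4) and (3.5) and verifies that substituting them into the general expression (3.3) reproduces the closed form (3.6).

```python
# Moment inputs: the values used in the empirical study of Section 5.
n, S2y, S2phi = 23, 4.074, 0.110
beta_y, beta_phi, delta22 = 3.811 - 1.0, 6.162 - 1.0, 3.996 - 1.0  # starred quantities
lam, S4y = 1.0 / n, S2y ** 2

A = beta_y + beta_phi - 2.0 * delta22            # A as defined after (3.5)
B = beta_phi - delta22                           # B as defined after (3.5)
D = beta_phi * (lam * A + 1.0) - lam * B ** 2    # common denominator of (3.4) and (3.5)

k1 = beta_phi * (2.0 - lam * beta_phi / 4.0) / (2.0 * D)               # optimum k1, eq. (3.4)
k2 = S2y * (D - B * (2.0 - lam * beta_phi / 4.0)) / (2.0 * S2phi * D)  # optimum k2, eq. (3.5)

# MSE(t4) evaluated directly from the general expression (3.3) ...
mse_33 = (S4y * ((k1 - 1.0) ** 2
                 + lam * k1 ** 2 * A
                 + lam * k1 * (delta22 - 0.75 * beta_phi))
          + lam * k2 ** 2 * S2phi ** 2 * beta_phi
          + 2.0 * lam * k2 * S2y * S2phi * (k1 * B - beta_phi / 2.0))

# ... and from the closed form (3.6), with MSE(t2) taken from (2.11).
mse_t2 = lam * S4y * (beta_y - delta22 ** 2 / beta_phi)
denom = 1.0 + mse_t2 / S4y
mse_36 = mse_t2 / denom - lam * beta_phi * (mse_t2 + lam * S4y * beta_phi / 16.0) / (4.0 * denom)

print(k1, k2)          # optimum constants
print(mse_33, mse_36)  # the two evaluations agree
```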

4. Efficiency Comparison

First we compare the efficiency of the proposed estimator under the optimum condition with that of the usual unbiased estimator $t_0 = s_y^2$, whose variance to the first order of approximation is $V(t_0) = \lambda S_y^4\,\beta^*_{2(y)}$. From (2.11) and (3.6) we have

$$V(t_0) - \min\mathrm{MSE}(t_4) = \frac{\lambda S_y^4\,\delta^{*2}_{22}}{\beta^*_{2(\phi)}} + \frac{\left[\mathrm{MSE}(t_2)\right]^2 / S_y^4}{1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}} + \frac{\lambda\,\beta^*_{2(\phi)}\left(\mathrm{MSE}(t_2) + \dfrac{\lambda S_y^4\,\beta^*_{2(\phi)}}{16}\right)}{4\left(1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}\right)} \geq 0 \quad \text{always.} \qquad (4.1)$$

Next we compare the efficiency of the proposed estimator under the optimum condition with that of the ratio estimator $t_1$. From (2.7) and (3.6) we have

$$\mathrm{MSE}(t_1) - \min\mathrm{MSE}(t_4) = \lambda S_y^4\left[\sqrt{\beta^*_{2(\phi)}} - \frac{\delta^*_{22}}{\sqrt{\beta^*_{2(\phi)}}}\right]^2 + \frac{\left[\mathrm{MSE}(t_2)\right]^2 / S_y^4}{1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}} + \frac{\lambda\,\beta^*_{2(\phi)}\left(\mathrm{MSE}(t_2) + \dfrac{\lambda S_y^4\,\beta^*_{2(\phi)}}{16}\right)}{4\left(1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}\right)} \geq 0 \quad \text{always.} \qquad (4.2)$$

Next we compare the efficiency of the proposed estimator under the optimum condition with that of the exponential ratio estimator $t_3$. From (2.8) and (3.6) we have

$$\mathrm{MSE}(t_3) - \min\mathrm{MSE}(t_4) = \lambda S_y^4\left[\frac{\sqrt{\beta^*_{2(\phi)}}}{2} - \frac{\delta^*_{22}}{\sqrt{\beta^*_{2(\phi)}}}\right]^2 + \frac{\left[\mathrm{MSE}(t_2)\right]^2 / S_y^4}{1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}} + \frac{\lambda\,\beta^*_{2(\phi)}\left(\mathrm{MSE}(t_2) + \dfrac{\lambda S_y^4\,\beta^*_{2(\phi)}}{16}\right)}{4\left(1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}\right)} \geq 0 \quad \text{always.} \qquad (4.3)$$

Finally we compare the efficiency of the proposed estimator under the optimum condition with that of the regression estimator $t_2$. From (2.11) and (3.6) we have

$$\mathrm{MSE}(t_2) - \min\mathrm{MSE}(t_4) = \frac{\left[\mathrm{MSE}(t_2)\right]^2 / S_y^4}{1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}} + \frac{\lambda\,\beta^*_{2(\phi)}\left(\mathrm{MSE}(t_2) + \dfrac{\lambda S_y^4\,\beta^*_{2(\phi)}}{16}\right)}{4\left(1 + \dfrac{\mathrm{MSE}(t_2)}{S_y^4}\right)} > 0 \quad \text{always.} \qquad (4.4)$$
5. Empirical study

We have used the data given in Sukhatme and Sukhatme (1970), p. 256, where y is the number of villages in a circle and $\phi$ equals 1 if the circle consists of more than five villages and 0 otherwise. The summary values are

n = 23, N = 89, $S_y^2 = 4.074$, $S_\phi^2 = 0.110$, $\delta_{40} = 3.811$, $\delta_{04} = 6.162$, $\delta_{22} = 3.996$.

The following table shows the percent relative efficiency (PRE) of the different estimators with respect to the usual unbiased estimator $t_0 = s_y^2$.

Table 1: PRE of different estimators with respect to $t_0$

Estimator    t0       t1         t2         t3         t4
PRE          100      141.898    262.187    254.274    296.016
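A short sketch (plain Python assumed) that reproduces the PREs in Table 1 from the summary values above, using the MSE expressions (2.7), (2.8), (2.11) and (3.6) and PRE(t) = 100 * V(t_0) / MSE(t):

```python
# Summary values from the data table above.
n, S2y = 23, 4.074
beta_y, beta_phi, delta22 = 3.811 - 1.0, 6.162 - 1.0, 3.996 - 1.0  # starred moments
lam, S4y = 1.0 / n, S2y ** 2

var_t0 = lam * S4y * beta_y                                               # usual estimator s_y^2
mse_t1 = lam * S4y * (beta_y + beta_phi - 2.0 * delta22)                  # ratio estimator, (2.7)
mse_t2 = lam * S4y * beta_y * (1.0 - delta22 ** 2 / (beta_y * beta_phi))  # regression, (2.11)
mse_t3 = lam * S4y * (beta_y + beta_phi / 4.0 - delta22)                  # exponential ratio, (2.8)

denom = 1.0 + mse_t2 / S4y                                                # adapted estimator, (3.6)
mse_t4 = mse_t2 / denom - lam * beta_phi * (mse_t2 + lam * S4y * beta_phi / 16.0) / (4.0 * denom)

for name, mse in [("t1", mse_t1), ("t2", mse_t2), ("t3", mse_t3), ("t4", mse_t4)]:
    print(name, 100.0 * var_t0 / mse)   # approximately 141.9, 262.2, 254.3 and 296.0
```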

Conclusion

The superiority of the proposed estimator is established theoretically by the conditions derived in Section 4, which hold universally. The results in Table 1 confirm this superiority numerically for the data set used in Section 5.

References

Abd-Elfattah, A.M., El-Sherpieny, E.A., Mohamed, S.M., Abdou, O.F. (2010): Improvement in estimating the population mean in simple random sampling using information on auxiliary attribute. Applied Mathematics and Computation.

Das, A.K., Tripathi, T.P. (1978): Use of auxiliary information in estimating the finite population variance. Sankhya 40: 139–148.

Grover, L.K. (2010): A correction note on improvement in variance estimation using auxiliary information. Communications in Statistics - Theory and Methods 39: 753–764.

Isaki, C.T. (1983): Variance estimation using auxiliary information. Journal of the American Statistical Association 78: 117–123.

Jhajj, H.S., Sharma, M.K., Grover, L.K. (2006): A family of estimators of population mean using information on auxiliary attribute. Pakistan Journal of Statistics 22(1): 43–50.

Kadilar, C., Cingi, H. (2006): Improvement in variance estimation using auxiliary information. Hacettepe Journal of Mathematics and Statistics 35(1): 111–115.

Kadilar, C., Cingi, H. (2007): Improvement in variance estimation in simple random sampling. Communications in Statistics - Theory and Methods 36: 2075–2081.

Naik, V.D., Gupta, P.C. (1996): A note on estimation of mean with known population proportion of an auxiliary character. Journal of the Indian Society of Agricultural Statistics 48(2): 151–158.

Prasad, B., Singh, H.P. (1990): Some improved ratio-type estimators of finite population variance in sample surveys. Communications in Statistics - Theory and Methods 19: 1127–1139.

Shabbir, J., Gupta, S. (2007): On estimating the finite population mean with known population proportion of an auxiliary variable. Pakistan Journal of Statistics 23(1): 1–9.

Shabbir, J., Gupta, S. (2007): On improvement in variance estimation using auxiliary information. Communications in Statistics - Theory and Methods 36(12): 2177–2185.

Singh, R., Chauhan, P., Sawan, N., Smarandache, F. (2007): A general family of estimators for estimating population variance using known value of some population parameter(s). Renaissance High Press.

Singh, R., Chauhan, P., Sawan, N., Smarandache, F. (2008): Ratio estimators in simple random sampling using information on auxiliary attribute. Pakistan Journal of Statistics and Operations Research 4(1): 47–53.
