
Journal of Mathematical Extension

Vol. 3, No. 1 (2008), 43-53

Some Properties of Entropy for the Exponentiated Pareto Distribution (EPD) Based on Order Statistics

J. Behboodian
Islamic Azad University-Shiraz Branch

S. Tahmasebi
Shiraz University

Abstract. In this paper, we derive the exact form of the entropy for the Exponentiated Pareto Distribution (EPD). Some properties of the entropy and mutual information are presented for order statistics of the EPD. Also, bounds are computed for the entropies of the sample minimum and maximum for the EPD.

AMS Subject Classification: 94A17; 60E05


Keywords and Phrases: Differential entropy, entropy bounds, exponentiated Pareto distribution, order statistics, mutual information.

1. Introduction

We first introduce the concept of differential entropy, which is the entropy of a continuous random variable. Let X be a random variable with cumulative distribution function F_X(x) = P(X ≤ x) and density


f_X(x) = F'_X(x). The differential entropy H(X) of a continuous random variable X with density f_X(x) is defined as

$$H(X) = -\int_{-\infty}^{\infty} f_X(x)\,\log f_X(x)\,dx = -\int_{0}^{1}\log f_X\!\left(F_X^{-1}(u)\right)du, \qquad (1)$$

where u = F_X(x). Now, let us consider the exponentiated Pareto distribution (EPD) with probability density function (pdf)

$$f_X(x) = \theta\lambda\left[1-(x+1)^{-\lambda}\right]^{\theta-1}(x+1)^{-(\lambda+1)}, \qquad x>0,\ \lambda>0,\ \theta>0, \qquad (2)$$

and cumulative distribution function (cdf)

$$F_X(x) = \left[1-(x+1)^{-\lambda}\right]^{\theta}, \qquad x>0,\ \lambda>0,\ \theta>0, \qquad (3)$$

where θ and λ are two shape parameters. When θ = 1, the above dis-

tribution corresponds to the standard Pareto distribution of the second

kind.
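As a brief illustration (not part of the original text), the pdf (2), cdf (3), and the quantile function used later in Section 3 translate directly into code. The Python sketch below assumes the parametrization above; the function names are ours, not the paper's.

    import numpy as np

    def epd_pdf(x, theta, lam):
        # pdf (2): theta*lam*[1 - (x+1)^(-lam)]^(theta-1) * (x+1)^(-(lam+1)), for x > 0
        return theta * lam * (1.0 - (x + 1.0) ** (-lam)) ** (theta - 1.0) * (x + 1.0) ** (-(lam + 1.0))

    def epd_cdf(x, theta, lam):
        # cdf (3): [1 - (x+1)^(-lam)]^theta, for x > 0
        return (1.0 - (x + 1.0) ** (-lam)) ** theta

    def epd_quantile(u, theta, lam):
        # inverse cdf: F^{-1}(u) = [1 - u^(1/theta)]^(-1/lam) - 1, for 0 < u < 1
        return (1.0 - u ** (1.0 / theta)) ** (-1.0 / lam) - 1.0

    # inverse-transform sampling; with theta = 1 this reduces to the Pareto distribution of the second kind
    rng = np.random.default_rng(0)
    sample = epd_quantile(rng.uniform(size=5), theta=2.0, lam=3.0)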

Analytical expressions for the entropy of univariate continuous distributions are discussed by Cover and Thomas [3], Lazo and Rathie [8], and Nadarajah and Zagrafos [9]. Also, the information properties of order statistics have been studied by a few authors; among them, Wong and Chen [15], Park [10], and Ebrahimi et al. [5] provided several results and some characterizations of the Shannon entropy for order statistics.

The rest of this paper is organized as follows. In Section 2, we derive the exact form of the entropy for the exponentiated Pareto distribution (EPD). In Section 3, we present the Shannon entropy for the j-th order statistic of the EPD, some properties of the entropy, and the mutual information for order statistics of the EPD.

2. Entropy for EPD

Suppose X is a random variable having the EPD(θ, λ) distribution, with density function (2). Using (1), the log-density of (2) is

$$\log f_X(x) = \log(\theta\lambda) + (\theta-1)\log\left(1-(x+1)^{-\lambda}\right) - (\lambda+1)\log(x+1), \qquad (4)$$

and the entropy is

$$\begin{aligned}
H(X) &= E\left(-\log f_X(X)\right) = -\int_0^{\infty} f_X(x)\,\log f_X(x)\,dx \\
&= -\log(\theta\lambda) + (1-\theta)\,E\!\left[\log\left(1-(X+1)^{-\lambda}\right)\right] + (\lambda+1)\,E\left[\log(X+1)\right]. \qquad (5)
\end{aligned}$$

So, we need to find E[log(1 − (X+1)^{−λ})] and E[log(X+1)] to obtain the Shannon entropy. The derivation of these two expectations is based on the following strategy:

$$k(r) = E\left[(X+1)^{r}\right] = \int_0^{\infty}\theta\lambda\,(x+1)^{r-(\lambda+1)}\left[1-(x+1)^{-\lambda}\right]^{\theta-1}dx. \qquad (6)$$

By the change of variable t = 1 − (x+1)^{−λ}, 0 < t < 1, we obtain

$$k(r) = E\left[(X+1)^{r}\right] = \int_0^{1}\theta\,t^{\theta-1}(1-t)^{-r/\lambda}\,dt
= \theta\,\frac{\Gamma(\theta)\,\Gamma\!\left(1-\tfrac{r}{\lambda}\right)}{\Gamma\!\left(\theta+1-\tfrac{r}{\lambda}\right)}, \qquad 1-\frac{r}{\lambda}\neq 0,-1,-2,\ldots. \qquad (7)$$

Differentiating both sides of (7) with respect to r, we obtain

$$k'(r) = E\left[(X+1)^{r}\log(X+1)\right]
= \theta\,\Gamma(\theta)\,\frac{-\tfrac{1}{\lambda}\,\Gamma'\!\left(1-\tfrac{r}{\lambda}\right)\Gamma\!\left(\theta+1-\tfrac{r}{\lambda}\right) + \tfrac{1}{\lambda}\,\Gamma\!\left(1-\tfrac{r}{\lambda}\right)\Gamma'\!\left(\theta+1-\tfrac{r}{\lambda}\right)}{\left[\Gamma\!\left(\theta+1-\tfrac{r}{\lambda}\right)\right]^{2}}. \qquad (8)$$
From relation (8), at r = 0 we obtain

$$E\left[\log(X+1)\right] = \frac{1}{\lambda}\left[\psi(\theta+1)-\psi(1)\right], \qquad (9)$$

where ψ is the digamma function defined by ψ(θ) = (d/dθ) ln Γ(θ).
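Relation (9) is easy to sanity-check numerically. The sketch below is illustrative only (the parameter values are arbitrary) and compares the integral E[log(X+1)] with the digamma expression, using SciPy.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import digamma

    theta, lam = 2.5, 1.5

    def epd_pdf(x):
        return theta * lam * (1.0 - (x + 1.0) ** (-lam)) ** (theta - 1.0) * (x + 1.0) ** (-(lam + 1.0))

    # left-hand side of (9): E[log(X+1)] by numerical integration
    lhs, _ = quad(lambda x: np.log(x + 1.0) * epd_pdf(x), 0.0, np.inf)

    # right-hand side of (9): [psi(theta+1) - psi(1)] / lambda
    rhs = (digamma(theta + 1.0) - digamma(1.0)) / lam

    print(lhs, rhs)  # the two values agree to several decimal places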

Now we calculate

$$t(r) = E\left[\left(1-(X+1)^{-\lambda}\right)^{r}\right]
= \int_0^{\infty}\theta\lambda\left(1-(x+1)^{-\lambda}\right)^{r+\theta-1}(x+1)^{-(\lambda+1)}dx
= \frac{\theta}{\theta+r}, \qquad (10)$$

$$\left.\frac{dt(r)}{dr}\right|_{r=0} = E\left[\log\left(1-(X+1)^{-\lambda}\right)\right] = \frac{-1}{\theta}. \qquad (11)$$
Putting (9) and (11) in relation (5), we have

$$H(X) = -\log(\lambda\theta) + \frac{\lambda+1}{\lambda}\left[\psi(\theta+1)-\psi(1)\right] + \frac{\theta-1}{\theta}, \qquad (12)$$

where −ψ(1) = 0.5772... is the Euler constant.
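The closed form (12) can likewise be checked against the defining integral (1). The following sketch (again illustrative, with an arbitrary choice of θ and λ) integrates −f log f numerically and compares it with (12).

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import digamma

    theta, lam = 2.0, 3.0

    def epd_pdf(x):
        return theta * lam * (1.0 - (x + 1.0) ** (-lam)) ** (theta - 1.0) * (x + 1.0) ** (-(lam + 1.0))

    # entropy by direct numerical integration of (1); start slightly above 0 to avoid log(0) when theta > 1
    h_numeric, _ = quad(lambda x: -epd_pdf(x) * np.log(epd_pdf(x)), 1e-12, np.inf)

    # closed form (12)
    h_closed = (-np.log(lam * theta)
                + (lam + 1.0) / lam * (digamma(theta + 1.0) - digamma(1.0))
                + (theta - 1.0) / theta)

    print(h_numeric, h_closed)  # both values are approximately equal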



3. Some Properties of Entropy Based on Order Statistics of the EPD

Let X_1, ..., X_n be a random sample from a distribution F_X(x) with density f_X(x) > 0. The order statistics of this sample are defined by the arrangement of X_1, ..., X_n from the smallest to the largest, denoted by Y_1 < Y_2 < ... < Y_n. The density of Y_j, j = 1, ..., n, is

$$f_{Y_j}(y) = \frac{n!}{(j-1)!\,(n-j)!}\,f_X(y)\left[F_X(y)\right]^{j-1}\left[1-F_X(y)\right]^{n-j}. \qquad (13)$$

Now, let U_1, U_2, ..., U_n be a random sample from U(0, 1) with order statistics W_1 < W_2 < ... < W_n. The density of W_j, j = 1, ..., n, is

$$f_{W_j}(w) = \frac{1}{B(j,\,n-j+1)}\,w^{j-1}(1-w)^{n-j}, \qquad 0 < w < 1, \qquad (14)$$

where

$$B(j,\,n-j+1) = \frac{\Gamma(j)\,\Gamma(n-j+1)}{\Gamma(n+1)} = \frac{(j-1)!\,(n-j)!}{n!}.$$

The entropy of the beta distribution is

$$H_n(W_j) = -(j-1)\left[\psi(j)-\psi(n+1)\right] - (n-j)\left[\psi(n+1-j)-\psi(n+1)\right] + \log B(j,\,n-j+1),$$

where ψ(t) = d log Γ(t)/dt and ψ(n+1) = ψ(n) + 1/n.
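For later use, the beta entropy H_n(W_j) above is straightforward to code; the helper below is an illustrative Python version (the function name is ours).

    from scipy.special import digamma, betaln

    def beta_order_stat_entropy(j, n):
        # H_n(W_j) = -(j-1)[psi(j) - psi(n+1)] - (n-j)[psi(n+1-j) - psi(n+1)] + log B(j, n-j+1)
        return (-(j - 1) * (digamma(j) - digamma(n + 1))
                - (n - j) * (digamma(n + 1 - j) - digamma(n + 1))
                + betaln(j, n - j + 1))

    # for the sample minimum (j = 1) this equals 1 - log(n) - 1/n, as used below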
Using the fact that W_j = F_X(Y_j) and Y_j = F_X^{-1}(W_j), j = 1, 2, ..., n, are one-to-one transformations, the entropies of the order statistics can be computed by

$$\begin{aligned}
H(Y_j) &= H_n(W_j) - E_{g_j}\!\left[\log f_X\!\left(F_X^{-1}(W_j)\right)\right] \qquad &(15)\\
&= H_n(W_j) - \int f_j(y)\,\log f_X(y)\,dy. \qquad &(16)
\end{aligned}$$

Now, we can give an application of (16) to the EPD. Let X be a random variable having the EPD(θ, λ). For computing H(Y_j), we have

$$F_X^{-1}(W_j) = \left[1 - W_j^{1/\theta}\right]^{-1/\lambda} - 1,$$

and the expectation term in (15) is obtained as follows:

$$\begin{aligned}
E_{g_j}\!\left[\log f_X\!\left(F_X^{-1}(W_j)\right)\right]
&= E_{g_j}\!\left[\log(\theta\lambda) + \frac{\lambda+1}{\lambda}\log\!\left(1-W_j^{1/\theta}\right) + (\theta-1)\log\!\left(W_j^{1/\theta}\right)\right] \\
&= \log(\theta\lambda) + \frac{\lambda+1}{\lambda}\,E_{g_j}\!\left[\log\!\left(1-W_j^{1/\theta}\right)\right] + \frac{\theta-1}{\theta}\,E_{g_j}\!\left[\log W_j\right] \\
&= \log(\theta\lambda) + \frac{\lambda+1}{\lambda}\left[\frac{n!}{(j-1)!}\sum_{k=0}^{n-j}\frac{(-1)^{k}\left(\psi(1)-\psi(\theta k + j\theta + 1)\right)}{k!\,(n-j-k)!\,(k+j)}\right] \\
&\quad + \frac{\theta-1}{\theta}\left(\psi(j)-\psi(n+1)\right). \qquad (17)
\end{aligned}$$

Therefore, by (15) and (17), the entropy of the j-th order statistic is

$$H(Y_j) = H_n(W_j) - \log(\theta\lambda)
+ \frac{\lambda+1}{\lambda}\left[\frac{n!}{(j-1)!}\sum_{k=0}^{n-j}\frac{(-1)^{k}\left(-\psi(1)+\psi(\theta k + j\theta + 1)\right)}{k!\,(n-j-k)!\,(k+j)}\right]
+ \frac{1-\theta}{\theta}\left(\psi(j)-\psi(n+1)\right). \qquad (18)$$
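Formula (18) can be checked against a direct numerical integration of the order-statistic density (13). The self-contained sketch below is illustrative only; the sample size, the order j, and the parameters are arbitrary.

    import numpy as np
    from math import factorial
    from scipy.integrate import quad
    from scipy.special import digamma, betaln

    theta, lam, n, j = 2.0, 3.0, 5, 2

    def pdf(x):
        return theta * lam * (1 - (x + 1) ** (-lam)) ** (theta - 1) * (x + 1) ** (-(lam + 1))

    def cdf(x):
        return (1 - (x + 1) ** (-lam)) ** theta

    def pdf_order(y):
        # density (13) of the j-th order statistic Y_j
        c = factorial(n) / (factorial(j - 1) * factorial(n - j))
        return c * pdf(y) * cdf(y) ** (j - 1) * (1 - cdf(y)) ** (n - j)

    # H(Y_j) by direct numerical integration
    h_numeric, _ = quad(lambda y: -pdf_order(y) * np.log(pdf_order(y)), 1e-12, np.inf)

    # H(Y_j) from (18), with H_n(W_j) as in the beta-entropy expression above
    hn_wj = (-(j - 1) * (digamma(j) - digamma(n + 1))
             - (n - j) * (digamma(n + 1 - j) - digamma(n + 1))
             + betaln(j, n - j + 1))
    s = sum((-1) ** k * (-digamma(1) + digamma(theta * k + j * theta + 1))
            / (factorial(k) * factorial(n - j - k) * (k + j)) for k in range(n - j + 1))
    h_formula = (hn_wj - np.log(theta * lam)
                 + (lam + 1) / lam * factorial(n) / factorial(j - 1) * s
                 + (1 - theta) / theta * (digamma(j) - digamma(n + 1)))

    print(h_numeric, h_formula)  # the two values agree closely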

For the sample minimum, j = 1, H_n(W_1) = 1 − log n − 1/n, and

$$H(Y_1) = 1 - \log n - \frac{1}{n} - \log(\theta\lambda)
+ \frac{\lambda+1}{\lambda}\left[\sum_{u=1}^{n}(-1)^{u-1}\binom{n}{u}\left(\psi(\theta u + 1)+\gamma\right)\right]
+ \frac{\theta-1}{\theta}\left(\psi(n)+\frac{1}{n}+\gamma\right), \qquad (19)$$

where γ = −ψ(1) = 0.5772... is the Euler constant.


The distribution function of Y_n is F_n(y) = [1 − (y+1)^{−λ}]^{nθ} I_{(0,∞)}(y) and the density is f_n(y) = nθλ[1 − (y+1)^{−λ}]^{nθ−1}(y+1)^{−(λ+1)} I_{(0,∞)}(y). Noting that H_n(W_n) = 1 − log n − 1/n, formula (18) gives

$$H(Y_n) = 1 - \log n - \frac{1}{n} - \log(\theta\lambda)
+ \frac{\lambda+1}{\lambda}\left[\psi(n\theta+1)+\gamma\right]
+ \frac{\theta-1}{\theta}\cdot\frac{1}{n}. \qquad (20)$$
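As a spot check of (20) (the same can be done for (19)), one can integrate −f_n log f_n directly; the sketch below is illustrative, with an arbitrary choice of n, θ, and λ.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import digamma

    theta, lam, n = 2.0, 3.0, 5
    gamma_euler = -digamma(1.0)  # Euler's constant, approximately 0.5772

    def f_max(y):
        # density of Y_n: n*theta*lam*[1 - (y+1)^(-lam)]^(n*theta - 1) * (y+1)^(-(lam+1))
        return n * theta * lam * (1 - (y + 1) ** (-lam)) ** (n * theta - 1) * (y + 1) ** (-(lam + 1))

    h_numeric, _ = quad(lambda y: -f_max(y) * np.log(f_max(y)), 1e-12, np.inf)

    # closed form (20)
    h_formula = (1 - np.log(n) - 1.0 / n - np.log(theta * lam)
                 + (lam + 1) / lam * (digamma(n * theta + 1) + gamma_euler)
                 + (theta - 1) / theta * (1.0 / n))

    print(h_numeric, h_formula)  # the two values agree closely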

For any random variable X with H(X) < ∞, Ebrahimi et al. [5] showed that the entropy of the order statistic Y_j, j = 1, 2, ..., n, is bounded as follows:

$$H(Y_j) \ge H_n(W_j) - \log M, \qquad (21)$$

and

$$H(Y_j) \le H_n(W_j) - \log M + nB_j\left(H(X)+\log M\right), \qquad (22)$$

where M is the mode of the distribution and B_j denotes the j-th term of the binomial probability Bin(n−1, (j−1)/(n−1)). Therefore, we can compute

the bounds for the entropies of the sample minimum and maximum for the EPD with parameters λ and θ. We have M = ((λθ+1)/(λ+1))^{1/λ} − 1. So,

$$1 - \log n - \frac{1}{n} - \log\left[\left(\frac{\lambda\theta+1}{\lambda+1}\right)^{1/\lambda} - 1\right] \le H(Y_1), \qquad (23)$$

and

$$H(Y_n) \le 1 - \log n - \frac{1}{n} + (n-1)\log\left[\left(\frac{\lambda\theta+1}{\lambda+1}\right)^{1/\lambda} - 1\right]
+ n\left(-\log(\lambda\theta) + \frac{\lambda+1}{\lambda}\left[\psi(\theta+1)-\psi(1)\right] + \frac{\theta-1}{\theta}\right). \qquad (24)$$

The lower and upper bounds for the entropies of the sample minimum and maximum of the EPD are most useful when n is small.
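To see how the bounds behave for a small sample, (23) and (24) can be evaluated directly; the sketch below simply follows the formulas above (with M as given for the EPD) for an arbitrary choice of n, θ, and λ.

    import numpy as np
    from scipy.special import digamma

    theta, lam, n = 2.0, 3.0, 5

    # M = [(lam*theta + 1)/(lam + 1)]^(1/lam) - 1, as given above; M > 0 for theta > 1
    M = ((lam * theta + 1) / (lam + 1)) ** (1.0 / lam) - 1

    # exact entropy H(X) from (12), needed in the upper bound (24)
    h_x = (-np.log(lam * theta)
           + (lam + 1) / lam * (digamma(theta + 1) - digamma(1))
           + (theta - 1) / theta)

    lower_y1 = 1 - np.log(n) - 1.0 / n - np.log(M)                      # bound (23) on H(Y_1)
    upper_yn = 1 - np.log(n) - 1.0 / n + (n - 1) * np.log(M) + n * h_x  # bound (24) on H(Y_n)

    print(lower_y1, upper_yn)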

Information theory provides some concepts of extensive use in statistics, one of which is the mutual information of two random variables. It is a generalization of the coefficient of determination. For a bivariate random vector (X, Y) with joint density function f(x, y) and marginal density functions f_X(x) and f_Y(y), the mutual information is defined as


$$I(X, Y) = \int_{S} f(x,y)\,\log\frac{f(x,y)}{f_X(x)\,f_Y(y)}\,dx\,dy = H(X) + H(Y) - H(X, Y), \qquad (25)$$

where S is the region on which f(x, y) > 0 and H(X, Y) is the entropy of (X, Y).

Mutual information for order statistics has an important role in statistical sciences. In view of Ebrahimi et al. [5], the degree of dependency among Y_1, ..., Y_n is measured by the mutual information between consecutive order statistics, defined by


$$I_n(Y_j, Y_{j+1}) = -\log\binom{n}{j} + n\psi(n) - j\psi(j) - (n-j)\psi(n-j) - 1. \qquad (26)$$

For given n, I_n(Y_j, Y_{j+1}) is symmetric in j and n−j; it increases in j for j < n/2 and decreases for j > n/2, and I_n(Y_j, Y_{j+1}) is increasing in n. Thus, I_n(Y_j, Y_{j+1}) is maximal at the median and is symmetric about the median. Now, suppose Y_1, ..., Y_n denote the order statistics of a random sample X_1, ..., X_n from the EPD. Then we can calculate the mutual information between Y_1 and Y_n. Thus, we have

$$\begin{aligned}
I(Y_1, Y_n) &= H(Y_1) + H(Y_n) - H(Y_1, Y_n) \\
&= -\log\left[n(n-1)\right] - \frac{n-2}{n(n-1)^{2}} + 4\left(1-\frac{1}{n}\right) - 2 - 2\log(\theta\lambda) \\
&\quad + \frac{\lambda+1}{\lambda}\left[\sum_{u=1}^{n}(-1)^{u-1}\binom{n}{u}\left(\psi(\theta u+1)+\gamma\right) + \psi(n\theta+1)+\gamma\right] \\
&\quad + \frac{\theta-1}{\theta}\left(\psi(n)+\frac{2}{n}+\gamma\right). \qquad (27)
\end{aligned}$$

Note that H(Y_1, Y_n) can be computed as

$$H(Y_1, Y_n) = -\int_0^{\infty}\!\int_0^{z} f_{Y_1,Y_n}(y,z)\,\log f_{Y_1,Y_n}(y,z)\,dy\,dz.$$
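This double integral can be evaluated numerically once the joint density of (Y_1, Y_n) is written out; the standard form n(n−1) f(y) f(z) [F(z) − F(y)]^{n−2} for y < z (see, e.g., [1]) is used in the illustrative sketch below, with arbitrary parameters. Subtracting the result from H(Y_1) + H(Y_n), taken from (19) and (20), gives a numerical value of I(Y_1, Y_n) that can be compared with (27).

    import numpy as np
    from scipy.integrate import dblquad

    theta, lam, n = 2.0, 3.0, 5

    def pdf(x):
        return theta * lam * (1 - (x + 1) ** (-lam)) ** (theta - 1) * (x + 1) ** (-(lam + 1))

    def cdf(x):
        return (1 - (x + 1) ** (-lam)) ** theta

    def joint(y, z):
        # joint density of (Y_1, Y_n): n(n-1) f(y) f(z) [F(z) - F(y)]^(n-2), for 0 < y < z (see [1])
        return n * (n - 1) * pdf(y) * pdf(z) * (cdf(z) - cdf(y)) ** (n - 2)

    def integrand(y, z):
        val = joint(y, z)
        return -val * np.log(val) if val > 0 else 0.0

    # H(Y_1, Y_n): inner variable y runs over (0, z), outer variable z over (0, inf); may take a few seconds
    h_joint, _ = dblquad(integrand, 0, np.inf, 0, lambda z: z)
    print(h_joint)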

4. Conclusion

We have derived the exact form of the Shannon entropy for the Exponentiated Pareto Distribution (EPD) and its order statistics. This distribution is applied in reliability, actuarial sciences, economics, and telecommunications. We have also presented some properties of the entropy and mutual information for order statistics of the EPD.

Acknowledgements

We would like to thank the Executive Editor and the two referees for their careful reading of our manuscript and helpful suggestions, which improved the paper.

References
[1] B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja, A first course in order statistics, Wiley, New York, 1991.

[2] N. Ahmed and D. V. Gokhale, Entropy expressions and their estimators for multivariate distributions, IEEE Trans. Inf. Theory, 35 (1989), 688-692.

[3] T. M. Cover and J. A. Thomas, Elements of information theory, Wiley, New York, 1991.

[4] G. A. Darbellay and I. Vajda, Entropy expressions for multivariate continuous distributions, IEEE Trans. Inf. Theory, 46 (2000), 709-712.

[5] N. Ebrahimi, E. S. Soofi, and H. Zahedi, Information properties of order statistics and spacings, IEEE Trans. Inf. Theory, 50 (2004), 177-183.

[6] N. L. Johnson, S. Kotz, and N. Balakrishnan, Continuous univariate distributions, Vol. 1, 2nd Edition, Wiley, New York, 1994.

[7] J. N. Kapur, Measures of information and their applications, John Wiley, New York, 1994.

[8] A. C. Lazo and P. N. Rathie, On the entropy of continuous probability distributions, IEEE Trans. Inf. Theory, 24 (1978), 120-122.

[9] S. Nadarajah and K. Zagrafos, Formulas for Renyi information and related measures for univariate distributions, Information Sciences, 155 (2003), 119-138.

[10] S. Park, The entropy of consecutive order statistics, IEEE Trans. Inf. Theory, 41 (1995), 2003-2007.

[11] S. Park, Fisher information in order statistics, Journal of the American Statistical Association, 91 (1996), 385-390.

[12] A. Renyi, On measures of entropy and information, Proc. 4th Berkeley Symposium on Math. Statist. and Probability, 1 (1961), 547-561.

[13] C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal, 27 (1948), 379-432.

[14] E. S. Soofi, Principal information theoretic approaches, Journal of the American Statistical Association, 95 (2000), 1349-1353.

[15] K. M. Wong and S. Chen, The entropy of ordered sequences and order statistics, IEEE Trans. Inf. Theory, 36 (1990), 276-284.

Javad Behboodian
Department of Mathematics
Islamic Azad University - Shiraz Branch
Shiraz, Iran.
E-mail: behboodian@susc.ac.ir

Saeid Tahmasebi
Department of Mathematics and Statistics
Shiraz University
Shiraz 71454, Iran
E-mail: Stahmasby@yahoo.com
