Journal of Statistical Theory and Applications

Volume 17, Issue 4, December 2018, Pages 703 - 718

Weighted Entropy Measure: A New Measure of Information with its Properties in Reliability Theory and Stochastic Orders

Authors
M. Ramadan1
1Department of Statistics, Mathematics and Insurance, College of Commerce, Benha University, Egypt.
*Corresponding author. Email: mramadan@benha-univ.edu.eg

Received 25 April 2017, Accepted 6 May 2018, Available Online 31 December 2018.
DOI
10.2991/jsta.2018.17.4.11
Keywords
Shannon information; mean residual life; mean reversed life
Abstract

The weighted entropy measure is a germane dynamic measure of uncertainty in reliability and survival studies. In this paper, new results on weighted entropies, together with some characterizations, are provided. Furthermore, we present results on the weighted residual entropy and the weighted past entropy of order statistics, with applications to reliability systems such as the series structure and the parallel structure. In addition, we introduce a lower bound for the weighted residual (past) entropy. Moreover, stochastic orders based on the weighted entropy are presented. Finally, we illustrate the usefulness of the proposed non-parametric estimators of the weighted entropy by an application to real data.

Copyright
© 2018 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

1. INTRODUCTION

Weighted distributions have been utilized in many fields, such as distribution theory, reliability, probability, ecology, biostatistics and other applied areas.

Consider a random variable $Y \ge 0$ with distribution function $G$ and density function $g$. Let

$$l_Y=\inf\{y: G(y)>0\},\qquad u_Y=\sup\{y: G(y)<1\},\qquad S_Y=(l_Y,u_Y),$$

and let $w:S_Y\to\mathbb{R}^{+}$ be a weight function. The weighted random variable $Y^{w}$ has probability density function

$$g^{w}(y)=\frac{w(y)\,g(y)}{E[w(Y)]},\qquad y\in S_Y,$$

where $0<E[w(Y)]<\infty$. Let $Y$ represent the life length of a unit in reliability studies, with survival function $\bar G_Y$, hazard rate function $\bar\varphi_G(\cdot)=g_Y(\cdot)/\bar G_Y(\cdot)$, reversed hazard rate $\varphi_G(\cdot)=g_Y(\cdot)/G_Y(\cdot)$, geometric vitality function $\vartheta_Y=E[\ln Y\mid Y>0]$, and mean reversed residual lifetime

$$\theta(\kappa)=E[\kappa-Y\mid Y\le\kappa]=\frac{\int_0^{\kappa} G_Y(u)\,du}{G_Y(\kappa)},\qquad \kappa\in\mathbb{R}^{+}.$$
As reported by Ebrahimi and Pellerey [1] and Asha and Rejeesh [2], the differential entropy $H(Y)$ represents the expected uncertainty contained in $g_Y$. It also measures how the distribution spreads over its domain: there is an inverse relationship between the value of $H(Y)$ and the concentration of the probability mass of $Y$. $H(Y)$ is sometimes called a dynamic measure of uncertainty or the Shannon information measure.

The differential entropy of a random variable $Y$ in the continuous case is defined as

$$H(Y)=E[-\ln g_Y(Y)]=-\int_0^{\infty} g_Y(u)\ln g_Y(u)\,du.$$

Khinchin [3] generalized Eq. (3) as

$$H_{\phi}(Y)=E\big[\phi(g_Y(Y))\big]=\int_0^{\infty} g_Y(u)\,\phi\big(g_Y(u)\big)\,du,$$

where $\phi(\cdot)$ is a suitable function.
Di Crescenzo and Longobardi [4] developed the following weighted (convex) entropy measure:

$$H^{w}(Y)=-\int_0^{\infty} v\,g_Y(v)\ln g_Y(v)\,dv,$$

or equivalently,

$$H^{w}(Y)=-\int_0^{\infty} dy\int_y^{\infty} g_Y(v)\ln g_Y(v)\,dv.$$
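For concreteness, $H^{w}$ is easy to evaluate numerically. The following is a minimal sketch of ours (not part of the original development), assuming SciPy is available; it uses the exponential density as a test case, for which $H^{w}(Y)=(2-\ln\alpha)/\alpha$ (see Example 5.1 below):

```python
import numpy as np
from scipy.integrate import quad

def weighted_entropy(pdf, logpdf):
    # H^w(Y) = -int_0^inf v * g(v) * ln g(v) dv, by quadrature
    integrand = lambda v: -v * pdf(v) * logpdf(v)
    value, _ = quad(integrand, 0, np.inf)
    return value

alpha = 0.5                                    # exponential rate
pdf = lambda v: alpha * np.exp(-alpha * v)
logpdf = lambda v: np.log(alpha) - alpha * v   # passed separately to avoid log(0)
print(weighted_entropy(pdf, logpdf))           # ~5.3863
print((2 - np.log(alpha)) / alpha)             # closed form, matches Table 1
```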
The uncertainty of the residual lifetime is discussed in Di Crescenzo and Longobardi [5], with the following measure:

$$H(Y;\kappa)=1-E\big[\ln\bar\varphi_G(Y)\mid Y>\kappa\big]=-\int_{\kappa}^{\infty}\frac{g_Y(v)}{\bar G(\kappa)}\ln\frac{g_Y(v)}{\bar G(\kappa)}\,dv,\qquad \kappa\in A,$$

where $A=\{y\in\mathbb{R}^{+}\mid \bar G_Y(y)>0\}$. In addition, the past entropy has been widely researched; it is given by

$$\bar H_G(Y;\kappa)=1-E\big[\ln\varphi_G(Y)\mid Y<\kappa\big]=-\int_0^{\kappa}\frac{g_Y(v)}{G(\kappa)}\ln\frac{g_Y(v)}{G(\kappa)}\,dv,\qquad \kappa\in\mathbb{R}^{+}.$$
Di Crescenzo and Longobardi [4] defined the weighted (convex) residual entropy as

$$H^{w}(Y;\kappa)=-\int_{\kappa}^{\infty}\frac{y\,g_Y(y)}{\bar G(\kappa)}\ln\frac{y\,g_Y(y)}{\bar G(\kappa)}\,dy,\qquad \kappa\in\mathbb{R}^{+}.$$
Furthermore, let $Y_1$ and $Y_2$ be two random variables with distribution functions $G_1(\cdot)$ and $G_2(\cdot)$, density functions $g_{Y_1}(\cdot)$ and $g_{Y_2}(\cdot)$, and survival functions $\bar G_1(\cdot)$ and $\bar G_2(\cdot)$, respectively. Kullback and Leibler [6] introduced an information distance between the two distributions $G_1$ and $G_2$ as follows:

$$I(Y_1,Y_2)=\int_0^{\infty} g_{Y_1}(u)\ln\frac{g_{Y_1}(u)}{g_{Y_2}(u)}\,du.$$

In addition, Ebrahimi and Kirmani [7] demonstrated that the Kullback–Leibler discrimination information between $Y_1$ and $Y_2$ at time $\kappa$ can be presented as

$$I_R(Y_1,Y_2)(\kappa)=\int_{\kappa}^{\infty}\frac{g_{Y_1}(u)}{\bar G_1(\kappa)}\ln\frac{g_{Y_1}(u)/\bar G_1(\kappa)}{g_{Y_2}(u)/\bar G_2(\kappa)}\,du.$$
We can use Eq. (5) to discriminate between two residual lifetimes that have both survived up to time $\kappa$, since $I_R(Y_1,Y_2)(\kappa)$ coincides with the relative entropy of $[Y_1-\kappa\mid Y_1\ge\kappa]$ and $[Y_2-\kappa\mid Y_2\ge\kappa]$.

The purpose of this study is to develop further properties, characterizations, order-statistics results, inequalities and stochastic orders for weighted differential entropy measures. In Section 2, definitions, notation, basic properties and characterizations are illustrated. The weighted entropy (residual and past) of order statistics, with applications to reliability systems such as the series structure and the parallel structure, is given in Section 3; there we also provide a lower bound for the weighted residual (past) entropy. Stochastic orders based on the weighted entropy are developed in Section 4. Lastly, in Section 5, the suggested estimators of the weighted entropy are presented, and we illustrate their usefulness by an application to real data.

Throughout this article, the term entropy is used instead of differential entropy, and we use the abbreviations PL for past lifetime, WPE for weighted past entropy, WRE for weighted residual entropy, SS for the series system, PS for the parallel system, and SE for smaller than or equal.

2. THE WEIGHTED DIFFERENTIAL ENTROPY

The weighted differential entropy (WDE) was defined by Das [8] for a random variable $Y$ with weight function $w(x)=x$ as

$$\xi^{w}(Y)=-\frac{\vartheta^{w}(Y)+\int_0^{\infty} x f_Y(x)\ln f_Y(x)\,dx-E[Y]\ln E[Y]}{E[Y]}=\frac{H^{w}(Y)}{E[Y]}+\ln E[Y]-\frac{\vartheta^{w}(Y)}{E[Y]},$$

where

$$\vartheta^{w}(Y)=\int_0^{\infty} x f_Y(x)\ln x\,dx=E[Y\ln Y],$$

whenever $\int_0^{\infty}\frac{u^{\theta} f_Y(u)}{E[Y^{\theta}]}\left|\ln\frac{u^{\theta} f_Y(u)}{E[Y^{\theta}]}\right|du<\infty$.
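The decomposition $\xi^{w}(Y)=H^{w}(Y)/E[Y]+\ln E[Y]-\vartheta^{w}(Y)/E[Y]$ can be verified numerically; the following is a small sketch of ours (not part of the original development), assuming SciPy is available, for the exponential case, where Example 5.1 below gives $\xi^{w}(Y)=2-\ln\alpha-\psi(2)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

alpha = 1.0
pdf = lambda y: alpha * np.exp(-alpha * y)

mean = quad(lambda y: y * pdf(y), 0, np.inf)[0]                   # E[Y]
theta_w = quad(lambda y: y * pdf(y) * np.log(y), 0, np.inf)[0]    # E[Y ln Y]
H_w = quad(lambda y: -y * pdf(y) * (np.log(alpha) - alpha * y), 0, np.inf)[0]

xi_w = H_w / mean + np.log(mean) - theta_w / mean
print(xi_w, 2 - np.log(alpha) - digamma(2))   # both ~1.5772
```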

As a general case, we define the generalized WDE in the following definition:

Definition 2.1.

Given the weight function $w(y)=y^{\theta}$, $\theta\ge 0$, and a random variable $Y$ with probability density function $g_Y(\cdot)$, survival function $\bar G(\cdot)$ and mean $E[Y]$, the weighted differential entropy with weight function $w(y)=y^{\theta}$ is defined as

$$\xi^{w}(Y)=E\big[-\ln g_Y^{w}(Y)\big]=\int_0^{\infty} g_Y^{w}(u)\ln\frac{1}{g_Y^{w}(u)}\,du=-\int_0^{\infty}\frac{u^{\theta} g_Y(u)}{E[Y^{\theta}]}\ln\frac{u^{\theta} g_Y(u)}{E[Y^{\theta}]}\,du=-\frac{1}{E[Y^{\theta}]}\left[\theta\int_0^{\infty}u^{\theta}g_Y(u)\ln u\,du+\int_0^{\infty}u^{\theta}g_Y(u)\ln g_Y(u)\,du-E[Y^{\theta}]\ln E[Y^{\theta}]\right].$$
Now, let $Y_1,Y_2,\dots,Y_n$ be a sample from the distribution $F$ with $n\ge 3$. Following Vasicek [9], Eq. (6) can be rewritten as

$$\xi^{w}(Y)=\int_0^1 \ln\left\{\frac{d}{du}\big[u\,F^{-1}(u)\big]\right\}du.$$
In addition, Das [8] defined the weighted residual entropy as

$$\xi^{w}(Y;\kappa)=-\int_{\kappa}^{\infty}\frac{g_Y^{w}(v)}{\bar G^{w}(\kappa)}\ln\frac{g_Y^{w}(v)}{\bar G^{w}(\kappa)}\,dv=-\frac{1}{E[Y\mid Y>\kappa]}\int_{\kappa}^{\infty}\frac{v\,g_Y(v)}{\bar G(\kappa)}\ln\frac{v\,g_Y(v)}{E[Y\mid Y>\kappa]\,\bar G(\kappa)}\,dv,\qquad \kappa\in\mathbb{R}^{+}.$$
If $f_X(x)$ is the actual density function of a random variable $X$ and $g_Y(x)$ is the density function assigned by the researcher, then the weighted inaccuracy measure can be defined as

$$R^{w}(X,Y)=-\int_0^{\infty} x f_X(x)\ln g_Y(x)\,dx.$$
Next, we define the relative WDE of two densities.

Definition 2.2.

Let $X$ and $Y$ be two random variables with density functions $f_X(s)\ge 0$ and $g_Y(s)\ge 0$ and mean values $E[X]$ and $E[Y]$, respectively. The relative weighted differential entropy of $g_Y$ relative to $f_X^{w}$ can be defined as

$$\Im^{w}(X\|Y)=E^{w}\left[\ln\frac{f_X(X)\,g_Y(X)}{E[X]\,X}\right]=\int_0^{\infty}\frac{v f_X(v)}{E[X]}\,\ln\frac{f_X(v)\,g_Y(v)}{E[X]\,v}\,dv,$$

where $E^{w}$ denotes expectation with respect to the weighted density $f_X^{w}(v)=v f_X(v)/E[X]$.
By using Eq. (4), we can give an alternative formula for $\Im^{w}(X\|Y)$ as follows:

$$\Im^{w}(X\|Y)=\ln\frac{1}{E[X]}-\frac{H^{w}(X)}{E[X]}-\frac{1}{E[X]}\int_0^{\infty} x f_X(x)\ln\frac{x}{g_Y(x)}\,dx.$$
Note that when $g_Y(x)\equiv f_X(x)$, we have

$$\Im^{w}(X\|X)=\ln\frac{1}{E[X]}-\frac{H^{w}(X)}{E[X]}-\frac{1}{E[X]}\int_0^{\infty} x f_X(x)\ln\frac{x}{f_X(x)}\,dx=-\ln\left(E[X]\exp\left[\frac{\vartheta^{w}(X)+2H^{w}(X)}{E[X]}\right]\right).$$

Remark 2.1.

By using Eqs. (8) and (9) we get the following relation

$$\Im^{w}(X\|Y^{w})=\ln\frac{E[Y]}{E[X]}-\frac{H^{w}(X)}{E[X]}+\Im^{w}(X\|X),\qquad E[Y]<\infty.$$
Furthermore, we can define the divergence between $f_X^{w}$ and $g_Y$ as follows:

$$K(X^{w},Y)=\Im^{w}(X\|Y)+\Im^{w}(Y\|X)=\int_0^{\infty}\big(f_X^{w}(x)-g_Y(x)\big)\ln\frac{f_X^{w}(x)}{g_Y(x)}\,dx;$$

it is a measure of the difficulty of discriminating between them.

Now, let $X$ be a random variable with the beta distribution:

$$f_X(s)=\frac{s^{\alpha-1}(1-s)^{\beta-1}}{B(\alpha,\beta)},\qquad s\in(0,1).$$

By using Eq. (3), we get that $H(X)$ satisfies

$$H(X)=\ln\left(\frac{B(\alpha,\beta)\,\exp\big[(\alpha+\beta-2)\Psi(\alpha+\beta)\big]}{\exp\big[(\alpha-1)\Psi(\alpha)+(\beta-1)\Psi(\beta)\big]}\right),$$

where $B(\cdot,\cdot)$ is the beta function and $\Psi(\cdot)$ is the psi (digamma) function.

The following theorem states that this relationship actually characterizes the beta distribution.

Characterization Theorem 2.1:

Any random variable $Y$ with distribution function $K$, density function $f_Y(x)$, mean $E[Y]$, mode $\Gamma_K(Y)$, geometric mean $G(Y)$, entropy $H(Y)$ and weighted differential entropy $\xi^{w}(Y)$ satisfying the relationship

$$\xi^{w}(Y)=H(Y)-\frac{(\alpha-1)\big(\Gamma_K(Y)-E[Y]\big)}{\alpha\,\Gamma_K(Y)}+\ln E[Y]-\ln G(Y)+\frac{E[Y]-1}{\alpha}$$

is either degenerate or has a beta distribution. Indeed, the degenerate case can be subsumed in the beta distribution with $\alpha,\beta\in\mathbb{R}^{+}$.

Proof.

We use the following integral formula, taken from Gradshteyn and Ryzhik ([10], formula 4.253(1), p. 538):

$$\int_0^1 y^{\theta-1}\big(1-y^{c}\big)^{\lambda-1}\ln y\,dy=\frac{1}{c^{2}}\,B\!\left(\frac{\theta}{c},\lambda\right)\left[\Psi\!\left(\frac{\theta}{c}\right)-\Psi\!\left(\frac{\theta}{c}+\lambda\right)\right],$$

provided that $\mathrm{Re}\,\theta>0$, $\mathrm{Re}\,\lambda>0$ and $c>0$; here $B(\cdot,\cdot)$ denotes the beta function and $\Psi(\cdot)$ the digamma function. Therefore, we have

$$\vartheta^{w}(Y)=\int_0^1 x f_Y(x)\ln x\,dx=\frac{1}{B(\alpha,\beta)}\Big[B(\alpha+1,\beta)\big(\Psi(\alpha+1)-\Psi(\alpha+\beta+1)\big)\Big]=E[Y]\big(\Psi(\alpha+1)-\Psi(\alpha+\beta+1)\big).$$
By using Example 2.3 in Di Crescenzo and Longobardi ([4], p. 682), Eq. (4) and the recurrence relation of the digamma function, we can rewrite $H^{w}(Y)$ as

$$H^{w}(Y)=E[Y]\left[\ln B(\alpha,\beta)+(1-\alpha)\left(\Psi(\alpha)+\frac{1}{\alpha}\right)+(\alpha+\beta-2)\left(\Psi(\alpha+\beta)+\frac{1}{\alpha+\beta}\right)+(1-\beta)\Psi(\beta)\right].$$

We can reduce Eq. (11) to

$$H^{w}(Y)=E[Y]\left[H(Y)+\frac{1-\alpha}{\alpha}+\frac{\alpha+\beta-2}{\alpha+\beta}\right]=E[Y]\left[H(Y)-\frac{(\alpha-1)\big(\Gamma_K(Y)-E[Y]\big)}{\alpha\,\Gamma_K(Y)}\right].$$
This holds with $\Gamma_K(Y)$ the mode of $Y$, i.e., $f_Y(\Gamma_K(Y))=\max_{0<x<1} f_Y(x)$, so that $\Gamma_K(Y)=(\alpha-1)/(\alpha+\beta-2)$ for $\alpha,\beta>1$. Furthermore, by Eqs. (6) and (10) we have

$$\xi^{w}(Y)=H(Y)-\frac{(\alpha-1)\big(\Gamma_K(Y)-E[Y]\big)}{\alpha\,\Gamma_K(Y)}+\ln E[Y]-\Psi(\alpha+1)+\Psi(\alpha+\beta+1)=H(Y)-\frac{(\alpha-1)\big(\Gamma_K(Y)-E[Y]\big)}{\alpha\,\Gamma_K(Y)}+\ln E[Y]-\ln G(Y)+\frac{E[Y]-1}{\alpha}.$$
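The identity in Theorem 2.1 can be checked numerically for a particular beta law; the sketch below is ours (assuming SciPy; Beta(2,3) as test case) and computes $\xi^{w}(Y)$ by direct quadrature of the weighted density, comparing it with the right-hand side:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln, digamma

a, b = 2.0, 3.0
pdf = lambda y: np.exp((a - 1) * np.log(y) + (b - 1) * np.log(1 - y) - betaln(a, b))

mean = a / (a + b)
mode = (a - 1) / (a + b - 2)                 # Gamma_K(Y)
lnG = digamma(a) - digamma(a + b)            # ln of the geometric mean
H = betaln(a, b) - (a - 1) * digamma(a) - (b - 1) * digamma(b) \
    + (a + b - 2) * digamma(a + b)           # beta entropy

# left-hand side: xi^w by quadrature of -f^w ln f^w, with f^w(y) = y f(y)/E[Y]
lhs = quad(lambda y: -(y * pdf(y) / mean) * np.log(y * pdf(y) / mean), 0, 1)[0]
# right-hand side: the characterization identity
rhs = H - (a - 1) * (mode - mean) / (a * mode) + np.log(mean) - lnG + (mean - 1) / a
print(lhs, rhs)   # agree to quadrature accuracy
```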
In the next results we study the closure property of the weighted entropy under transformations. We can proceed analogously to Di Crescenzo and Longobardi [4] and introduce the following theorem.

Theorem 2.2.

Suppose $U$ is a random variable with density function $f_U(\cdot)$ and $\psi(u)$ is a strictly convex, strictly increasing, continuous and differentiable function with derivative $\frac{d}{du}\psi(u)$. Then

$$\xi^{w}(\psi(U))=\xi^{w}(U)\Big|_{\psi^{-1}(0)\le U\le \psi^{-1}(\infty)}+E^{w}\left[\ln\left|\frac{d}{du}\psi(U)\right|\,\Big|\,\psi^{-1}(0)\le U\le \psi^{-1}(\infty)\right],$$

where $E^{w}$ denotes expectation with respect to the weighted density $f_U^{w}$, i.e., $E^{w}[U]=\int_0^{\infty} v f_U^{w}(v)\,dv$.

Proof.

From Eq. (6) we have

$$\xi^{w}(\psi(U))=-\int_0^{\infty} f_U^{w}\big(\psi^{-1}(u)\big)\left|\frac{d}{du}\psi^{-1}(u)\right|\ln\left[f_U^{w}\big(\psi^{-1}(u)\big)\left|\frac{d}{du}\psi^{-1}(u)\right|\right]du.$$

We make the following assumptions:

  1. $\psi(u)$ is monotonically increasing in $u$;

  2. the change of variable $v=\psi^{-1}(u)$.

Therefore, it is clear that

$$\xi^{w}(\psi(U))=-\int_{\psi^{-1}(0)}^{\psi^{-1}(\infty)} f_U^{w}(v)\ln\left[f_U^{w}(v)\left|\frac{d}{dv}\psi(v)\right|^{-1}\right]dv=\int_{\psi^{-1}(0)}^{\psi^{-1}(\infty)} f_U^{w}(v)\ln\left|\frac{d}{dv}\psi(v)\right|dv+\xi^{w}(U)\Big|_{\psi^{-1}(0)\le U\le\psi^{-1}(\infty)}.$$
Hence the proof is completed.

Proposition 2.3.

Let $\phi(U)=\alpha U^{\beta}$, where $\alpha,\beta>0$. From this we deduce that

$$\xi^{w}(\phi(U))=\ln(\alpha\beta)+(\beta-1)\frac{\vartheta^{w}(U)}{E[U]}+\xi^{w}(U).$$

Proof.

From Eq. (6) we have

$$\xi^{w}(\phi(U))=\int_0^{\infty} f^{w}(v)\ln\big|\alpha\beta v^{\beta-1}\big|\,dv-\int_0^{\infty} f^{w}(v)\ln f^{w}(v)\,dv=\ln(\alpha\beta)+(\beta-1)\int_0^{\infty} f^{w}(v)\ln v\,dv+\xi^{w}(U).$$

By using Eq. (7), we get the required result.

From Definition 2.1, it is easy to obtain the following characterizations.

Example 2.1:

Suppose $U$ is a random variable having the log-normal distribution with density function

$$f_U(u)=\frac{1}{\sqrt{2\pi}\,\sigma u}\exp\left[-\frac{(\ln u-\ln\mu)^{2}}{2\sigma^{2}}\right],\qquad u>0,$$

with parameters $\mu,\sigma>0$. From Eq. (7) we get

$$\vartheta^{w}(U)=\int_0^{\infty} v\,\frac{1}{\sqrt{2\pi}\,\sigma v}\exp\left[-\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\right]\ln v\,dv.$$

Set $u=\ln v-\ln\mu$; then we have

$$\vartheta^{w}(U)=\frac{2}{\sqrt{2\pi}\,\sigma}\,e^{2\ln\mu}\int_0^{\infty}\exp\left(2u-\frac{u^{2}}{2\sigma^{2}}\right)du.$$

It follows from Gradshteyn and Ryzhik ([10], formula 3.322(1)) that

$$\int_a^{\infty}\exp\left(-\frac{x^{2}}{4\mu}-bx\right)dx=\sqrt{\pi\mu}\,e^{\mu b^{2}}\left[1-\Phi\left(b\sqrt{\mu}+\frac{a}{2\sqrt{\mu}}\right)\right],\qquad \mathrm{Re}\,\mu>0,\ a\ge 0.$$

Therefore,

$$\vartheta^{w}(U)=\exp\big(2\ln\mu+\sigma^{2}\big)\left[1-\Phi\big(2^{1/2}\sigma\big)\right].$$
In addition,
$$H^{w}(U)=-\int_0^{\infty} v\,\frac{\exp\!\big[-\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\big]}{\sqrt{2\pi}\,\sigma v}\left[-\ln(\sqrt{2\pi}\sigma)-\ln v-\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\right]dv=\ln(\sqrt{2\pi}\sigma)E[U]+\vartheta^{w}(U)+\int_0^{\infty}\frac{\exp\!\big[-\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\big]}{\sqrt{2\pi}\,\sigma}\,\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\,dv.$$

In the same way, setting $u=\ln v-\ln\mu$ and using formula 3.462(1) in Gradshteyn and Ryzhik [10], we have

$$\int_0^{\infty}\frac{\exp\!\big[-\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\big]}{\sqrt{2\pi}\,\sigma}\,\frac{(\ln v-\ln\mu)^{2}}{2\sigma^{2}}\,dv=\big(1-\sigma^{2}\big)^{-3/2}+\frac{3\mu}{2\sqrt{2\pi}}\exp\left(\frac{\sigma^{2}}{4}\right)D_{-3}(\sigma).$$

Therefore,

$$H^{w}(U)=\ln(\sqrt{2\pi}\sigma)\,\mu e^{\sigma^{2}/2}+\exp\big(2\ln\mu+\sigma^{2}\big)\big[1-\Phi(2^{1/2}\sigma)\big]+\big(1-\sigma^{2}\big)^{-3/2}+\frac{3\mu}{2\sqrt{2\pi}}\exp\left(\frac{\sigma^{2}}{4}\right)D_{-3}(\sigma)=\ln(\sqrt{2\pi}\sigma)E[U]+\vartheta^{w}(U)+\big(1-\sigma^{2}\big)^{-3/2}+\frac{3\mu}{2\sqrt{2\pi}}\exp\left(\frac{\sigma^{2}}{4}\right)D_{-3}(\sigma),$$

where $\Phi(\cdot)$ is the error function, $\varpi(\mu,\sigma)=\mu\exp(\sigma^{2}/2)=E[U]$, and $D_x(y)=2^{x/2}\exp(-y^{2}/4)\,F\!\left(-\frac{x}{2},\frac{1}{2};\frac{y^{2}}{2}\right)$ is the parabolic cylinder function, $F(\cdot,\cdot;\cdot)$ denoting a confluent hypergeometric function of the first kind. Further,

$$\xi^{w}(U)=\frac{\alpha(\mu,\sigma)}{\varpi(\mu,\sigma)}+\ln\varpi(\mu,\sigma)-\frac{\exp\big(2\ln\mu+\sigma^{2}\big)\big[1-\Phi(2^{1/2}\sigma)\big]}{\varpi(\mu,\sigma)},$$

where $\alpha(\mu,\sigma)=\ln(\sqrt{2\pi}\sigma)\,\mu e^{\sigma^{2}/2}+\exp\big(2\ln\mu+\sigma^{2}\big)\big[1-\Phi(2^{1/2}\sigma)\big]+\big(1-\sigma^{2}\big)^{-3/2}+\frac{3\mu}{2\sqrt{2\pi}}\exp\big(\frac{\sigma^{2}}{4}\big)D_{-3}(\sigma)=H^{w}(U)$.

Furthermore,

$$\Im^{w}(X\|X)=-\ln\left(\varpi(\mu,\sigma)\exp\left[\frac{\beta_1(\mu,\sigma)+2\beta_2(\mu,\sigma)}{\varpi(\mu,\sigma)}\right]\right),$$

where:

  1. $\beta_1(\mu,\sigma)=\exp\big(2\ln\mu+\sigma^{2}\big)\big[1-\Phi(2^{1/2}\sigma)\big]$;

  2. $\beta_2(\mu,\sigma)=\ln(\sqrt{2\pi}\sigma)E[X]+\vartheta^{w}(X)+\big(1-\sigma^{2}\big)^{-3/2}+\frac{3\mu}{2\sqrt{2\pi}}\exp\big(\frac{\sigma^{2}}{4}\big)D_{-3}(\sigma)$.

Example 2.2:

Let $X$ be a random variable having the chi distribution with density function

$$f_X(x)=\frac{2\,(\pi/2)^{\pi/2}}{\gamma^{\pi}\,\Gamma(\pi/2)}\,x^{\pi-1}\exp\left(-\frac{\pi x^{2}}{2\gamma^{2}}\right),\qquad x>0,$$

where $\pi$ is a positive integer and $\gamma>0$. From Eq. (7) we have

$$\vartheta^{w}(X)=\frac{2\,(\pi/2)^{\pi/2}}{\gamma^{\pi}\,\Gamma(\pi/2)}\int_0^{\infty} x^{\pi}\exp\left(-\frac{\pi x^{2}}{2\gamma^{2}}\right)\ln x\,dx;$$

taking $u=\frac{\pi x^{2}}{2\gamma^{2}}$, we get

$$\vartheta^{w}(X)=\frac{\gamma}{\sqrt{2\pi}\,\Gamma(\pi/2)}\int_0^{\infty} u^{\frac{\pi+1}{2}-1}\,e^{-u}\,\ln\frac{u}{\pi/(2\gamma^{2})}\,du.$$

By using Gradshteyn and Ryzhik ([10], 4.352(1)) to evaluate this integral, we have the following result:

$$\vartheta^{w}(X)=\frac{\gamma}{\sqrt{2\pi}\,\Gamma(\pi/2)}\,\Gamma\!\left(\frac{\pi+1}{2}\right)\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right].$$
In addition, by Eq. (4) we have

$$H^{w}(X)=-\int_0^{\infty} x\,\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}\,x^{\pi-1}e^{-\frac{\pi x^{2}}{2\gamma^{2}}}\left[\ln\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}+(\pi-1)\ln x-\frac{\pi x^{2}}{2\gamma^{2}}\right]dx,$$

and $E[X]=\gamma\sqrt{\frac{2}{\pi}}\,\Gamma\!\left(\frac{\pi+1}{2}\right)\big/\Gamma\!\left(\frac{\pi}{2}\right)$. With this substitution we obtain

$$H^{w}(X)=-\pi_2-(\pi-1)\,\pi_1\int_0^{\infty} v^{\pi}e^{-\frac{\pi v^{2}}{2\gamma^{2}}}\ln v\,dv+\pi_3\int_0^{\infty} v^{\pi+2}e^{-\frac{\pi v^{2}}{2\gamma^{2}}}\,dv,$$

where $\pi_1=\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}$, $\pi_2=E[X]\ln\pi_1=\ln(\pi_1)\,\gamma\sqrt{\frac{2}{\pi}}\,\Gamma\!\left(\frac{\pi+1}{2}\right)\big/\Gamma\!\left(\frac{\pi}{2}\right)$ and $\pi_3=\frac{\pi}{2\gamma^{2}}\,\pi_1$. Direct calculations give

$$H^{w}(X)=\frac{\gamma(\pi+1)\,\Gamma\!\left(\frac{\pi+1}{2}\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{\pi}{2}\right)}-\ln\!\left[\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}\right]\gamma\sqrt{\frac{2}{\pi}}\,\frac{\Gamma\!\left(\frac{\pi+1}{2}\right)}{\Gamma\!\left(\frac{\pi}{2}\right)}-(\pi-1)\,\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}\int_0^{\infty} x^{\pi}e^{-\frac{\pi x^{2}}{2\gamma^{2}}}\ln x\,dx.$$

Evaluating the remaining integral in the same way as above and continuing the simplification, we conclude that

$$H^{w}(X)=\frac{\gamma\,\Gamma\!\left(\frac{\pi+1}{2}\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{\pi}{2}\right)}\left\{\pi+1-2\ln\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}-(\pi-1)\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right]\right\}.$$
Therefore,

$$\xi^{w}(X)=\frac{H^{w}(X)}{E[X]}+\ln E[X]-\frac{\vartheta^{w}(X)}{E[X]}=\frac{\pi+1}{2}-\ln\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}-\frac{\pi-1}{2}\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right]+\ln E[X]-\frac{1}{2}\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right].$$

Continuing the simplification,

$$\xi^{w}(X)=\frac{\pi+1}{2}-\ln\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}+\ln\!\left[\gamma\sqrt{\frac{2}{\pi}}\,\frac{\Gamma\!\left(\frac{\pi+1}{2}\right)}{\Gamma\!\left(\frac{\pi}{2}\right)}\right]-\frac{\pi}{2}\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right].$$
Moreover,

$$\Im^{w}(X\|X)=-\frac{1}{2}\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right]-\frac{2H^{w}(X)}{\alpha_1}-\ln\alpha_1,$$

where $\alpha_1=E[X]=\gamma\sqrt{\frac{2}{\pi}}\,\Gamma\!\left(\frac{\pi+1}{2}\right)\big/\Gamma\!\left(\frac{\pi}{2}\right)$. Continuing the simplification,

$$\Im^{w}(X\|X)=\left(\pi-\frac{3}{2}\right)\left[\psi\!\left(\frac{\pi+1}{2}\right)-\ln\frac{\pi}{2\gamma^{2}}\right]-(\pi+1)+2\ln\frac{2(\pi/2)^{\pi/2}}{\gamma^{\pi}\Gamma(\pi/2)}-\ln\!\left[\gamma\sqrt{\frac{2}{\pi}}\,\frac{\Gamma\!\left(\frac{\pi+1}{2}\right)}{\Gamma\!\left(\frac{\pi}{2}\right)}\right].$$

Example 2.3:

A random variable $U$ has the Laplace distribution with parameter $\alpha>0$ if its density function is

$$f_U(u)=\frac{\alpha}{2}\exp(-\alpha|u|),\qquad -\infty<u<\infty.$$

From Eq. (7), restricted to the positive half-line, we obtain

$$\vartheta^{w}(U)=\int_0^{\infty} v\,\alpha e^{-\alpha v}\ln v\,dv=\frac{\psi(2)-\ln\alpha}{\alpha}.$$

Furthermore, direct calculations give

$$H^{w}(U)=-\int_0^{\infty} x\,\alpha e^{-\alpha x}\ln\left(\frac{\alpha}{2}e^{-\alpha x}\right)dx=\frac{2-\ln\alpha+\ln 2}{\alpha}\ \text{nats}=\frac{1+H(U)}{\alpha}\ \text{nats}.$$

Since $E[U]=0$, we find that $\xi^{w}(U)$ and $\Im^{w}(U\|U)$ cannot be defined.

3. CONNECTION TO RELIABILITY THEORY

Suppose $U_1,U_2,\dots,U_n$ are i.i.d. lifetimes with probability density function $g(\cdot)$, distribution function $K(\cdot)$, survival function $\bar K(\cdot)$ and reversed hazard rate $\varphi_K(\cdot)$. The probability that any two (or more) observations in the random sample take the same value is zero; therefore, there exists a unique ordered arrangement of the sample observations according to magnitude. Let $0\le U_{(1)}\le U_{(2)}\le\cdots\le U_{(n)}<\infty$ be the corresponding order statistics. Then $U_{(r)}$ is the lifetime of an $(n-r+1)$-out-of-$n$ system. Write $g_{(r)}(\cdot)$, $K_{(r)}(\cdot)$, $\varphi_{(r)}(\cdot)$ and $\bar\varphi_{(r)}(\cdot)$ for the probability density function, the distribution function, the reversed hazard rate and the hazard rate of $U_{(r)}$, respectively. Then we have

$$g_{(r)}(\kappa)=C_r\,[K(\kappa)]^{r-1}[\bar K(\kappa)]^{n-r}g(\kappa),\qquad \kappa\in\mathbb{R}^{+},$$
$$K_{(r)}(\kappa)=\sum_{i=r}^{n}\binom{n}{i}[K(\kappa)]^{i}[\bar K(\kappa)]^{n-i},\qquad \varphi_{(r)}(\kappa)=\frac{C_r\,\varphi_K(\kappa)\,\beta_r}{\sum_{i=r}^{n}\binom{n}{i}\beta_i},$$

and

$$\bar\varphi_{(r)}(\kappa)=\frac{g_{(r)}(\kappa)}{\bar K_{(r)}(\kappa)},$$

where $C_r=\frac{n!}{(r-1)!\,(n-r)!}$ and $\beta_x=\big[K(\kappa)/\bar K(\kappa)\big]^{x}$. The weighted residual entropy of the order statistic $U_{(r)}$ is given by
$$\xi_1^{w}(U_{(r)};\kappa)=-\int_{\kappa}^{\infty}\frac{g_{(r)}^{w}(u)}{\bar K_{(r)}^{w}(\kappa)}\ln\frac{g_{(r)}^{w}(u)}{\bar K_{(r)}^{w}(\kappa)}\,du=-\frac{1}{E[U_{(r)}\mid U_{(r)}>\kappa]}\int_{\kappa}^{\infty}\frac{y\,g_{(r)}(y)}{\bar K_{(r)}(\kappa)}\ln\frac{y\,g_{(r)}(y)}{E[U_{(r)}\mid U_{(r)}>\kappa]\,\bar K_{(r)}(\kappa)}\,dy.$$
Alternatively,

$$\xi_1^{w}(U_{(r)};\kappa)=-\frac{1}{E[U_{(r)}\mid U_{(r)}>\kappa]}\int_{\kappa}^{\infty}\frac{y\,g_{(r)}(y)}{\bar K_{(r)}(\kappa)}\ln\frac{y\,\bar\varphi_{(r)}(y)\,\bar K_{(r)}(y)}{E[U_{(r)}\mid U_{(r)}>\kappa]\,\bar K_{(r)}(\kappa)}\,dy=\ln\Big[E[U_{(r)}\mid U_{(r)}>\kappa]\,\bar K_{(r)}(\kappa)\Big]-\frac{1}{E[U_{(r)}\mid U_{(r)}>\kappa]}\int_{\kappa}^{\infty}\frac{y\,g_{(r)}(y)}{\bar K_{(r)}(\kappa)}\ln\big(y\,\bar\varphi_{(r)}(y)\,\bar K_{(r)}(y)\big)\,dy.$$
Now, we can proceed analogously for the weighted entropy of the order statistics of the PL as follows:

$$\xi_2^{w}(U_{(r)};\kappa)=-\int_0^{\kappa}\frac{g_{(r)}^{w}(v)}{K_{(r)}^{w}(\kappa)}\ln\frac{g_{(r)}^{w}(v)}{K_{(r)}^{w}(\kappa)}\,dv=-\frac{1}{E[U_{(r)}\mid U_{(r)}<\kappa]}\int_0^{\kappa}\frac{v\,g_{(r)}(v)}{K_{(r)}(\kappa)}\ln\frac{v\,g_{(r)}(v)}{E[U_{(r)}\mid U_{(r)}<\kappa]\,K_{(r)}(\kappa)}\,dv,$$

which is equivalent to

$$\xi_2^{w}(U_{(r)};\kappa)=-\frac{1}{E[U_{(r)}\mid U_{(r)}<\kappa]}\int_0^{\kappa}\frac{x\,g_{(r)}(x)}{K_{(r)}(\kappa)}\ln\frac{x\,\varphi_{(r)}(x)\,K_{(r)}(x)}{E[U_{(r)}\mid U_{(r)}<\kappa]\,K_{(r)}(\kappa)}\,dx.$$
Direct calculations give

$$\xi_2^{w}(U_{(r)};\kappa)=\ln\Big[E[U_{(r)}\mid U_{(r)}<\kappa]\,K_{(r)}(\kappa)\Big]-\frac{1}{E[U_{(r)}\mid U_{(r)}<\kappa]}\int_0^{\kappa}\frac{v\,g_{(r)}(v)}{K_{(r)}(\kappa)}\ln\big(v\,\varphi_{(r)}(v)\,K_{(r)}(v)\big)\,dv,$$

for all $\kappa\ge 0$.
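These expressions are straightforward to evaluate for specific component distributions. The following sketch is ours (assuming SciPy) and computes $E[U_{(r)}\mid U_{(r)}>\kappa]$ and $\xi_1^{w}(U_{(r)};\kappa)$ by quadrature for i.i.d. Exp(1) components:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import comb, factorial

n, r, kappa = 5, 2, 0.5
g = lambda x: np.exp(-x)              # component density, Exp(1)
K = lambda x: 1.0 - np.exp(-x)        # component cdf
Cr = factorial(n, exact=True) // (factorial(r - 1, exact=True)
                                  * factorial(n - r, exact=True))

g_r = lambda x: Cr * K(x)**(r - 1) * (1 - K(x))**(n - r) * g(x)     # pdf of U_(r)
S_r = lambda x: 1.0 - sum(comb(n, i) * K(x)**i * (1 - K(x))**(n - i)
                          for i in range(r, n + 1))                  # survival of U_(r)

m = quad(lambda y: y * g_r(y) / S_r(kappa), kappa, 60)[0]   # E[U_(r) | U_(r) > kappa]
h = lambda y: y * g_r(y) / (m * S_r(kappa))                 # weighted residual density
xi1 = quad(lambda y: -h(y) * np.log(h(y)), kappa, 60)[0]    # tail beyond 60 is negligible
print(m, xi1)
```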

3.1. A Series Structure

Note that $U_{(1)}$ represents the lifetime of the series system. By using Eq. (13), a simple calculation gives the weighted residual entropy of $U_{(1)}$ as

$$\xi_1^{w}(U_{(1)};\kappa)=\ln\Big[E[U_{(1)}\mid U_{(1)}>\kappa]\,\bar K_{(1)}(\kappa)\Big]-\frac{1}{E[U_{(1)}\mid U_{(1)}>\kappa]}\int_{\kappa}^{\infty}\frac{v\,g_{(1)}(v)}{\bar K_{(1)}(\kappa)}\ln\big(v\,\bar\varphi_{(1)}(v)\,\bar K_{(1)}(v)\big)\,dv=\ln\Big[E[U_{(1)}\mid U_{(1)}>\kappa]\,\bar K_{(1)}(\kappa)\Big]-\frac{1}{E[U_{(1)}\mid U_{(1)}>\kappa]}\int_{\kappa}^{\infty}\frac{v\,n[\bar K(v)]^{n-1}g(v)}{[\bar K(\kappa)]^{n}}\ln\big(v\,\bar\varphi_{(1)}(v)\,[\bar K(v)]^{n}\big)\,dv.$$
It follows from Proposition 1 in Bairamov et al. [11] and the definition of the mean residual lifetime of an $(n-k+1)$-out-of-$n$ system in Asadi and Bayramoglu [12] that

$$E[U_{(1)}\mid U_{(1)}>\kappa]=\frac{\int_{\kappa}^{\infty}\bar K_{(1)}(v)\,dv}{\bar K_{(1)}(\kappa)}=M_1(\kappa),\ \text{say},$$

and

$$\bar K(\kappa)=\left[\frac{M_1(0)}{M_1(\kappa)}\exp\left(-\int_0^{\kappa}\frac{dv}{M_1(v)}\right)\right]^{1/n}.$$

Then Eq. (15) can be written in the following form:

$$\xi_1^{w}(U_{(1)};\kappa)=\ln\left[M_1(0)\exp\left(-\int_0^{\kappa}\frac{dv}{M_1(v)}\right)\right]-\frac{\displaystyle\int_{\kappa}^{\infty}v\,n\,[\bar K(v)]^{n-1}g(v)\,[\bar K(\kappa)]^{-n}\ln\big(v\,\bar\varphi_{(1)}(v)\,[\bar K(v)]^{n}\big)\,dv}{M_1(0)\exp\left(-\displaystyle\int_0^{\kappa}\frac{dv}{M_1(v)}\right)}.$$
Similarly, by Eq. (14), the WPE of $U_{(1)}$ follows:

$$\xi_2^{w}(U_{(1)};\kappa)=\ln\Big[E[U_{(1)}\mid U_{(1)}<\kappa]\,K_{(1)}(\kappa)\Big]-\frac{1}{E[U_{(1)}\mid U_{(1)}<\kappa]}\int_0^{\kappa}\frac{v\,g_{(1)}(v)}{K_{(1)}(\kappa)}\ln\big(v\,\varphi_{(1)}(v)\,K_{(1)}(v)\big)\,dv=\ln\Big[E[U_{(1)}\mid U_{(1)}<\kappa]\,K_{(1)}(\kappa)\Big]-\frac{1}{E[U_{(1)}\mid U_{(1)}<\kappa]}\int_0^{\kappa}\frac{v\,n[\bar K(v)]^{n-1}g(v)}{1-[\bar K(\kappa)]^{n}}\ln\big(v\,\varphi_{(1)}(v)\,[1-\bar K^{n}(v)]\big)\,dv.$$

Eq. (12) implies that

$$\xi_2^{w}(U_{(1)};\kappa)=\ln\Big(E[U_{(1)}\mid U_{(1)}<\kappa]\,\big[1-\bar K^{n}(\kappa)\big]\Big)-\frac{\displaystyle\int_{\bar K(\kappa)}^{1} n\,u^{n-1}\,\bar K^{-1}(u)\,\ln\Big[n\,u^{n-1}\,\bar K^{-1}(u)\,g\big(\bar K^{-1}(u)\big)\Big]\,du}{E[U_{(1)}\mid U_{(1)}<\kappa]\,\big[1-\bar K^{n}(\kappa)\big]}.$$
According to Eqs. (4) and (5) in Tavangar and Asadi [13], the mean PL of the series system can be obtained as follows:

$$P_1(\kappa)=E[\kappa-U_{(1)}\mid U_{(1)}<\kappa]=\frac{\sum_{l=1}^{n}\binom{n}{l}\alpha^{l}(\kappa)\,S_l(\kappa)}{\sum_{l=1}^{n}\binom{n}{l}\alpha^{l}(\kappa)},$$

where $\alpha(\cdot)=K(\cdot)/\bar K(\cdot)$ and

$$S_{\pi}(\kappa)=\int_0^{\kappa}\sum_{l=1}^{\pi}\binom{\pi}{l}\left[\frac{K(\kappa-v)}{K(\kappa)}\right]^{l}\left[1-\frac{K(\kappa-v)}{K(\kappa)}\right]^{\pi-l}dv.$$
Equations (17) and (18) demonstrate that

$$\xi_2^{w}(U_{(1)};\kappa)=\ln\Big[\big(\kappa-P_1(\kappa)\big)\big(1-\bar K^{n}(\kappa)\big)\Big]-\frac{\displaystyle\int_{\bar K(\kappa)}^{1} n\,u^{n-1}\,\bar K^{-1}(u)\,\ln\Big[n\,u^{n-1}\,\bar K^{-1}(u)\,g\big(\bar K^{-1}(u)\big)\Big]\,du}{\big(\kappa-P_1(\kappa)\big)\big[1-\bar K^{n}(\kappa)\big]}.$$

3.2. A Parallel Structure

Note that $U_{(n)}$ refers to the lifetime of the PS, with survival function $\bar K_{(n)}(\cdot)=1-K^{n}(\cdot)$. Based on Eq. (13), we can define the weighted residual entropy of $U_{(n)}$ as

$$\xi_1^{w}(U_{(n)};\kappa)=\ln\Big[E[U_{(n)}\mid U_{(n)}>\kappa]\,\big(1-K^{n}(\kappa)\big)\Big]-\frac{1}{E[U_{(n)}\mid U_{(n)}>\kappa]\,\big(1-K^{n}(\kappa)\big)}\int_{\kappa}^{\infty} v\,nK^{n-1}(v)g(v)\ln\big(v\,nK^{n-1}(v)g(v)\big)\,dv.$$
Applying Theorem 2.1 (p. 477) of Asadi and Bayramoglu [14] and Eq. (18), we obtain

$$E[U_{(n)}\mid U_{(n)}>\kappa]=B_n(\kappa)+\kappa,$$

where $B_n(\kappa)$ is the mean residual lifetime of the PS, which can be found as

$$B_n(\kappa)=\frac{\sum_{s=1}^{n}(-1)^{s-1}\binom{n}{s}\,\bar K^{s}(\kappa)\,\beta_s(\kappa)}{\sum_{l=1}^{n}(-1)^{l-1}\binom{n}{l}\,\bar K^{l}(\kappa)},$$

and $\beta_j(\kappa)=\int_{\kappa}^{\infty}\bar K^{j}(v)\,dv\big/\bar K^{j}(\kappa)$. Now, it is evident that

$$\xi_1^{w}(U_{(n)};\kappa)=\ln\Big[\big(B_n(\kappa)+\kappa\big)\big(1-K^{n}(\kappa)\big)\Big]-\frac{n}{\big(B_n(\kappa)+\kappa\big)\big(1-K^{n}(\kappa)\big)}\int_{\kappa}^{\infty} v\,K^{n-1}(v)g(v)\ln\big(v\,nK^{n-1}(v)g(v)\big)\,dv.$$
By using Eq. (4) in Asadi ([15], p. 1200), the mean PL of the PS is

$$\theta_n(\kappa)=E[\kappa-U_{(n)}\mid U_{(n)}\le\kappa]=\frac{\int_0^{\kappa}K^{n}(v)\,dv}{K^{n}(\kappa)}.$$

Putting $r=n$ in Eq. (14) and using Eqs. (12) and (14), we get the WPE of $U_{(n)}$ as

$$\xi_2^{w}(U_{(n)};\kappa)=\ln\Big[\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)\Big]-\frac{1}{\kappa-\theta_n(\kappa)}\int_0^{\kappa}\frac{v\,g_{(n)}(v)}{K^{n}(\kappa)}\ln\big(v\,\varphi_{(n)}(v)\,K_{(n)}(v)\big)\,dv=\ln\Big[\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)\Big]-\frac{1}{\kappa-\theta_n(\kappa)}\int_0^{\kappa}\frac{v\,g_{(n)}(v)}{K^{n}(\kappa)}\ln\big(v\,n\varphi_K(v)\,K^{n}(v)\big)\,dv,$$

which is equivalent to

$$\xi_2^{w}(U_{(n)};\kappa)=\ln\Big[\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)\Big]-\frac{n}{\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)}\int_0^{\kappa} v\,K^{n-1}(v)g(v)\ln\big(v\,n\varphi_K(v)\,K^{n}(v)\big)\,dv,$$

for all $\kappa\ge 0$.
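Analogously to the series case, $\theta_n(\kappa)$ and $\xi_2^{w}(U_{(n)};\kappa)$ can be computed by quadrature; a sketch of ours (assuming SciPy) with Exp(1) components:

```python
import numpy as np
from scipy.integrate import quad

n, kappa = 4, 2.0
g = lambda v: np.exp(-v)          # component density, Exp(1)
K = lambda v: 1.0 - np.exp(-v)    # component cdf

theta_n = quad(lambda v: K(v)**n, 0, kappa)[0] / K(kappa)**n   # mean past lifetime
m = kappa - theta_n                                            # E[U_(n) | U_(n) <= kappa]

g_n = lambda v: n * K(v)**(n - 1) * g(v)                       # pdf of U_(n)
h = lambda v: v * g_n(v) / (m * K(kappa)**n)                   # weighted past density
xi2 = quad(lambda v: -h(v) * np.log(h(v)), 0, kappa)[0]
print(theta_n, xi2)
```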

3.3. Some Inequalities

Next, we derive an upper bound for the WRE of $U_{(r)}$. It is obvious that

$$\xi_1^{w}(U_{(r)};\kappa)=\ln E[U_{(r)}\mid U_{(r)}>\kappa]+\ln\bar K_{(r)}(\kappa)-\frac{1}{E[U_{(r)}\mid U_{(r)}>\kappa]}\int_{\kappa}^{\infty}\frac{v\,g_{(r)}(v)}{\bar K_{(r)}(\kappa)}\ln\big(v\,\bar\varphi_{(r)}(v)\,\bar K_{(r)}(v)\big)\,dv.$$

Since $\ln E[U_{(r)}\mid U_{(r)}>\kappa]\ge 0$ and

$$\kappa\ge 0\ \Longrightarrow\ \ln\bar K_{(r)}(\kappa)\le 0,$$

by using Gupta et al. [16] we can deduce that

$$\xi_1^{w}(U_{(r)};\kappa)\le\ln E[U_{(r)}\mid U_{(r)}>\kappa].$$

For $r=1$, we have

$$\xi_1^{w}(U_{(1)};\kappa)\le\ln M_1(\kappa).$$

In addition, we know that $\ln\big[(B_n(\kappa)+\kappa)(1-K^{n}(\kappa))\big]\ge 0$. Hence, we have

$$\xi_1^{w}(U_{(n)};\kappa)\le\ln\Big[\big(B_n(\kappa)+\kappa\big)\big(1-K^{n}(\kappa)\big)\Big].$$
In the next result, we derive a lower bound for the WPE of $U_{(n)}$.

Proposition 3.1:

Suppose $U\ge 0$ is a random variable with distribution function $K(v)$. Then

$$\xi_2^{w}(U_{(n)};\kappa)\ge 1-\frac{n^{2}\,E\big[U^{2}K^{2n-2}(U)\,g(U)\mid U\le\kappa\big]}{\big(\kappa-\theta_n(\kappa)\big)^{2}\,K^{2n-1}(\kappa)}.$$

Proof.

Using Eq. (2), the inequality $\ln y\le y-1$ for $y>0$, and the fact that

$$\int_0^{\kappa}\frac{n\,vK^{n-1}(v)g(v)}{\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)}\,dv=1,$$

we obtain

$$\xi_2^{w}(U_{(n)};\kappa)\ge\int_0^{\kappa}\frac{n\,vK^{n-1}(v)g(v)}{\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)}\left[1-\frac{n\,vK^{n-1}(v)g(v)}{\big(\kappa-\theta_n(\kappa)\big)K^{n}(\kappa)}\right]dv.$$

The result follows.
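The mechanism of the proof (the inequality $\ln y\le y-1$) can be checked directly: $\int_0^{\kappa}h(1-h)\,dv$, with $h$ the weighted past density, is always a lower bound for $\xi_2^{w}(U_{(n)};\kappa)$. A sketch of ours, reusing the Exp(1) parallel system from above:

```python
import numpy as np
from scipy.integrate import quad

n, kappa = 4, 2.0
g = lambda v: np.exp(-v)
K = lambda v: 1.0 - np.exp(-v)
theta_n = quad(lambda v: K(v)**n, 0, kappa)[0] / K(kappa)**n
m = kappa - theta_n
h = lambda v: v * n * K(v)**(n - 1) * g(v) / (m * K(kappa)**n)

xi2 = quad(lambda v: -h(v) * np.log(h(v)), 0, kappa)[0]
lower = quad(lambda v: h(v) * (1 - h(v)), 0, kappa)[0]   # = 1 - int h^2
print(lower <= xi2, lower, xi2)                          # the bound holds
```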

4. STOCHASTIC ORDERS BASED ON WEIGHTED ENTROPY

In this section, we explore applications of stochastic orders based on the weighted entropy.

Definition 4.1.

Assume $U_1\ge 0$ and $U_2\ge 0$ are two random variables with density functions $g_1$ and $g_2$, distribution functions $G_{U_1}$ and $G_{U_2}$, reliability functions $\bar G_{U_1}=1-G_{U_1}$ and $\bar G_{U_2}=1-G_{U_2}$, weighted entropy functions $\xi^{w}_{g_1}(\cdot)$ and $\xi^{w}_{g_2}(\cdot)$, convex residual entropy functions $H^{w}_{g_1}(U_1;t)$ and $H^{w}_{g_2}(U_2;t)$, and weighted inaccuracy measures $\Im^{w}(U_1\|U_1)$ and $\Im^{w}(U_2\|U_2)$, respectively. We say that $U_1$ is SE to $U_2$ in the:

  • weighted entropy ordering ($U_1\le_{\xi^{w}}U_2$) if $\xi^{w}_{g_1}(x)\le\xi^{w}_{g_2}(x)$ for all $x\ge 0$;

  • weighted inaccuracy ordering ($U_1\le_{\Im^{w}}U_2$) if $\Im^{w}(U_1\|U_1)\le\Im^{w}(U_2\|U_2)$;

  • convex residual entropy ordering ($U_1\le_{cw}U_2$) if $H^{w}_{g_1}(U_1;t)\le H^{w}_{g_2}(U_2;t)$ for all $t\ge 0$;

  • less uncertainty ordering ($U_1\le_{LU}U_2$) if $H(U_1)\le H(U_2)$.

Definition 4.2.

Let $U_1$ and $U_2$ be two random variables; then $U_1$ is said to be SE to $U_2$ in the convex order ($U_1\le_{cx}U_2$) if

$$E[\phi(U_1)]\le E[\phi(U_2)]$$

for all convex functions $\phi$.

Definition 4.3.

The random variable $U_1$ is said to be increasing hazard rate, IHR, if and only if $\bar G_{U_1}(u+v)/\bar G_{U_1}(u)$ is decreasing in $u\ge 0$ for all $v\ge 0$.

The next result discusses the closure of $\xi^{w}$ under increasing power transformations:

Theorem 4.1.

Suppose $U_1$ and $U_2$ are two random variables, and define the new variables

$$V_1=\alpha_1 U_1^{\beta_1}\quad\text{and}\quad V_2=\alpha_2 U_2^{\beta_2},\qquad \alpha_1,\alpha_2\in\mathbb{R}^{+},\ \beta_1,\beta_2\in\mathbb{R}^{+}.$$

Let (i) $U_1\le_{\xi^{w}}U_2$, (ii) $\alpha_1\le\alpha_2$, (iii) $\beta_1\le\beta_2$. Then $V_1\le_{\xi^{w}}V_2$ if $U_1\le_{cx}U_2$.

Proof.

Since $\varphi(x)=x\ln x$ is a convex function, $U_1\le_{cx}U_2$ implies $E[U_1]=E[U_2]$ and $\vartheta^{w}(U_1)=E[\varphi(U_1)]\le E[\varphi(U_2)]=\vartheta^{w}(U_2)$. Now suppose that $U_1\le_{\xi^{w}}U_2$, $\alpha_1\le\alpha_2$ and $\beta_1\le\beta_2$. Applying Eq. (7) and Proposition 2.3 to $V_1$ and $V_2$, we obtain $V_1\le_{\xi^{w}}V_2$.

Corollary 4.1.

Suppose the two random variables $U_1$ and $U_2$ satisfy

$$U_1\le_{\xi^{w}}U_2.$$

Define $V_1=\alpha U_1^{\beta}$ and $V_2=\alpha U_2^{\beta}$, $\alpha,\beta\in\mathbb{R}^{+}$. Then $V_1\le_{\xi^{w}}V_2$ if $U_1\le_{cx}U_2$.

In the next theorem, we explain preservation properties of $\xi^{w}$, $\Im^{w}$ and $\le_{LU}$ between two exponential random variables when their scale parameters are ordered.

Theorem 4.2.

Let $U_1$ and $U_2$ be two absolutely continuous random variables with density functions

$$f_i(x)=\alpha_i\exp(-\alpha_i x),\qquad \alpha_i,x>0,\ i=1,2.$$

  • If $\alpha_1\ge\alpha_2$, then $U_1\le_{\xi^{w}}U_2$.

  • If $\alpha_1\le\alpha_2$, then $U_1\le_{\Im^{w}}U_2$.

  • If $U_1\le_{\Im^{w}}U_2$, then $U_2\le_{LU}U_1$.

Proof.

The result is obtained immediately from Remark 2.1.

Many studies deal with the properties of repairable systems, such as minimal repair. If the system has virtual age $U_{n-1}$ immediately after the $(n-1)$th repair, the $n$th failure time $Y_n$ of the functioning system is distributed as

$$\Pr(Y_n\le y\mid U_{n-1}=u)=\frac{G(y+u)-G(u)}{\bar G(u)},$$

where $G(y)$ is the failure-time distribution of a new system ($U_0=0$). Suppose the $n$th repair cannot remove the damage incurred before the $(n-1)$th repair, and let $\alpha_n$ be the degree of the $n$th repair; then the time between the $(n-1)$th failure and the $n$th failure reduces from $Y_n$ to $\alpha_n Y_n$. If $\alpha_n=1$ for all $n\ge 1$, this agrees with the minimal repair model.

Suppose $V_n=\sum_{i=1}^{n}Y_i$, $n\ge 1$, with $V_0=0$, represents the time elapsed since the system was put in operation, and let $N_t=\sup\{n\ge 1:V_{n-1}\le t\}$ be the associated counting process. Kijima [17] proved that $N_t$ (or $\{V_n,\,n\ge 0\}$) is a non-homogeneous Poisson process when $\alpha_n=1$ for all $n\ge 1$. Ebrahimi and Pellerey [1] gave the following definition:

Definition 4.4.

A point process $\{N_t,\,t\ge 0\}$ consisting of interarrival times $Y_1,Y_2,\dots$ is increasing (decreasing) in the

  1. convex residual entropy order if

    $$H^{w}(B_i;t)\le(\ge)\,H^{w}(B_j;t),\qquad \text{for all } t\in\mathbb{R}^{+};$$

  2. weighted entropy order if

    $$\xi^{w}(B_i;t)\le(\ge)\,\xi^{w}(B_j;t)$$

for all $1\le i\le j\le n$, where $f_k$ is the conditional probability density function of $Y_k=V_k-V_{k-1}$, $k=1,2,\dots$, given $V_{k-1}=v_{k-1},\dots,V_1=v_1$, and

$$B_k\stackrel{st}{=}\big[Y_k\mid V_{k-1}=v_{k-1},\dots,V_1=v_1\big].$$

From Definition 4.4, we note that if a point process is increasing (decreasing), the uncertainty of the distribution is increasing (decreasing); i.e., the process is deteriorating (improving).

Lemma 4.1.

Let $B_k$ be as defined in Definition 4.4. Then, for $k=1,2,\dots$,

  1. $H^{w}(B_k;t)=H^{w}(U;t+v_{k-1})$;

  2. $\xi^{w}(B_k;t)=\xi^{w}(U;t+v_{k-1})$.

Proof.

See Ebrahimi and Pellerey [1], Theorem 2.5.

Theorem 4.3.

The stochastic point process $N_t=\sup\{n\ge 1:V_{n-1}\le t\}$, consisting of the failure times $V_{k-1}$, $k=1,2,\dots$, generated by a minimal repair policy is increasing (decreasing) in the

  1. convex residual entropy order if $G(\cdot)$ is IHR;

  2. weighted entropy order if $\xi^{w}(U;u)$ is increasing for all $u\ge 0$.

Proof.

Similarly to Lemma 4.1, we have $H^{w}(B_{k+1};t)=H^{w}(U;t+v_k)$. In addition, by Theorem 3.1 in Di Crescenzo and Longobardi [4] we conclude that

$$H^{w}(U;t+v_k)=(t+v_k)\big[1-\ln\lambda(t+v_k)\big]+(t+v_k)\,\lambda(t+v_k)\,\frac{d}{dt}H(U;t+v_k)+\frac{1}{\bar G(t+v_k)}\,I(t+v_k),$$

where

$$I(t)=\int_t^{\infty}\bar G(u)\,H(U;u)\,du-\int_t^{\infty}\bar G(u)\ln\frac{\bar G(u)}{\bar G(t)}\,du,$$

and by Theorem 2.1 in Ebrahimi and Pellerey [1] we have

$$H^{w}(U;t+v_k)=(t+v_k)\,H(U;t+v_k)+\frac{1}{\bar G(t+v_k)}\,I(t+v_k).$$

By Theorem 2.5 in Ebrahimi and Pellerey [1] we conclude that $H(U;t+v_{n-1})\ge H(U;t+v_n)$. Using Eqs. (20)–(22), and noting that for a continuous IHR distribution $G(\cdot)$ it is obvious that

$$\bar G(t+v_k)\le\bar G(t+v_{k-1})$$

and

$$I(t+v_k)\le I(t+v_{k-1}),$$

we get $H^{w}(U;t+v_k)\le H^{w}(U;t+v_{k-1})$. This completes the proof.

5. ENTROPY ESTIMATION

In this section, we introduce four non-parametric estimators of Eq. (6), following the ideas of Vasicek [9], Van Es [18], Ebrahimi et al. [19] and Al-Omari [20]; a sketch implementing all four appears after the list.

Let $Z_1,Z_2,\dots,Z_n$ be a random sample from the distribution $G$, and let $Z_{(1)}\le Z_{(2)}\le\cdots\le Z_{(n)}$ be the corresponding order statistics, with empirical distribution function $G_n(z)=n^{-1}\sum_{k=1}^{n}\mathbf{1}(Z_k\le z)$. Then the non-parametric estimators can be expressed as follows:

  1. Weighted Vasicek entropy ($V\xi^{w}_{\delta,n}(Z)$): We can estimate Eq. (8) by replacing $G^{w}(t)$ by the empirical distribution $G^{w}_n(t)$ and using a difference operator in place of the differential operator. Thus, the $V\xi^{w}_{\delta,n}(Z)$ estimator of Eq. (8) can be represented as

    $$V\xi^{w}_{\delta,n}(Z)=\frac{1}{n}\sum_{i=1}^{n}\ln\left\{\frac{n\sum_{j=1}^{i}w(Z_{(j)})}{\sum_{k=1}^{n}w(Z_{(k)})}\cdot\frac{Z_{(i+\delta)}-Z_{(i-\delta)}}{2\delta}\right\},$$

    where $\delta\in\mathbb{Z}^{+}$, known as the window size, satisfies $\delta<n/2$, and $Z_{(s)}=Z_{(1)}$ if $s<1$ and $Z_{(s)}=Z_{(n)}$ if $s>n$.

  2. Weighted $\delta$-spacings entropy ($SE\xi^{w}_{\delta,n}(Z)$): Estimating the weighted entropy from sample $\delta$-spacings, as introduced by Van Es [18], we can provide the $SE\xi^{w}_{\delta,n}(Z)$ estimator of Eq. (6) as

    $$SE\xi^{w}_{\delta,n}(Z)=\frac{1}{n}\sum_{i=1}^{n-\delta}\ln\left\{\frac{n\sum_{j=1}^{i}Z_{(j)}}{\sum_{k=1}^{n}Z_{(k)}}\cdot\frac{Z_{(i+\delta)}-Z_{(i)}}{\delta}\right\}-\psi(\delta)+\ln\delta+E[Z],$$

    where $\psi(\cdot)$ is the digamma function and $\ln\delta-\psi(\delta)$ corrects the bias of the entropy estimator.

  3. Weighted small-weights entropy ($WS\xi^{w}_{\delta,n}(Z)$): Assigning smaller weights to the spacings of Vasicek [9], we obtain

    $$WS\xi^{w}_{\delta,n}(Z)=\frac{1}{n}\sum_{i=1}^{n}\ln\left\{\frac{n\sum_{j=1}^{i}Z_{(j)}}{\sum_{k=1}^{n}Z_{(k)}}\cdot\frac{Z_{(i+\delta)}-Z_{(i-\delta)}}{\alpha_i\,\delta}\right\},$$

    where

    $$\alpha_k=\begin{cases}\dfrac{\delta+k-1}{\delta}, & 1\le k\le\delta,\\[4pt] 2, & \delta+1\le k\le n-\delta,\\[4pt] \dfrac{\delta+n-k}{\delta}, & n-\delta+1\le k\le n.\end{cases}$$

  4. Modified small-weights entropy ($MS\xi^{w}_{\delta,n}(Z)$): Assigning smaller weights as in Ibrahimi et al. [21], we get

    $$MS\xi^{w}_{\delta,n}(Z)=\frac{1}{n}\sum_{i=1}^{n}\ln\left\{\frac{n\sum_{j=1}^{i}w(Z_{(j)})}{\sum_{k=1}^{n}w(Z_{(k)})}\cdot\frac{Z_{(i+\delta)}-Z_{(i-\delta)}}{\beta_i\,\delta}\right\},$$

    where

    $$\beta_i=\begin{cases}\dfrac{3}{2}, & 1\le i\le\delta,\\[4pt] 2, & \delta+1\le i\le n-\delta,\\[4pt] \dfrac{3}{2}, & n-\delta+1\le i\le n.\end{cases}$$
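All four estimators are simple to implement; below is a minimal NumPy/SciPy sketch of ours following the displayed formulas (the function names and the default weight $w(x)=x$ are our choices, and the code mirrors the expressions as written above rather than any reference implementation):

```python
import numpy as np
from scipy.special import digamma

def _zs(z, s):
    # order-statistic lookup: Z_(s) = Z_(1) if s < 1 and Z_(s) = Z_(n) if s > n
    return z[np.clip(s, 0, len(z) - 1)]

def v_xi(z, d, w=lambda x: x):
    # weighted Vasicek estimator V xi^w_{d,n}
    z = np.sort(np.asarray(z, float)); n = len(z); i = np.arange(n)
    wz = w(z)
    spac = (_zs(z, i + d) - _zs(z, i - d)) / (2.0 * d)
    return np.mean(np.log(n * np.cumsum(wz) / wz.sum() * spac))

def se_xi(z, d):
    # weighted delta-spacings (Van Es type) estimator SE xi^w_{d,n}
    z = np.sort(np.asarray(z, float)); n = len(z); i = np.arange(n - d)
    spac = (z[i + d] - z[i]) / d
    s = np.sum(np.log(n * np.cumsum(z)[i] / z.sum() * spac))
    return s / n - digamma(d) + np.log(d) + z.mean()

def ws_xi(z, d):
    # small-weights estimator WS xi^w_{d,n}
    z = np.sort(np.asarray(z, float)); n = len(z); k = np.arange(1, n + 1)
    a = np.where(k <= d, (d + k - 1.0) / d,
                 np.where(k <= n - d, 2.0, (d + n - k) / d))
    spac = (_zs(z, k - 1 + d) - _zs(z, k - 1 - d)) / (a * d)
    return np.mean(np.log(n * np.cumsum(z) / z.sum() * spac))

def ms_xi(z, d, w=lambda x: x):
    # modified small-weights estimator MS xi^w_{d,n}
    z = np.sort(np.asarray(z, float)); n = len(z); k = np.arange(1, n + 1)
    b = np.where(k <= d, 1.5, np.where(k <= n - d, 2.0, 1.5))
    wz = w(z)
    spac = (_zs(z, k - 1 + d) - _zs(z, k - 1 - d)) / (b * d)
    return np.mean(np.log(n * np.cumsum(wz) / wz.sum() * spac))
```

For example, `v_xi(data, 2)` can be applied directly to the glass-fibre data used below.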

Example 5.1:

Let $Z$ be a random variable having the exponential distribution with density function $f_Z(z)=\alpha\exp(-\alpha z)$, $\alpha,z>0$, with parameter $\alpha>0$. From Eq. (7) we obtain

$$\vartheta^{w}(Z)=\frac{\psi(2)-\ln\alpha}{\alpha},$$

where $\psi(\cdot)$ is Euler's psi function. By Example 2.1(a) in Di Crescenzo and Longobardi [4] we get

$$H^{w}(Z)=\frac{2-\ln\alpha}{\alpha}\qquad\text{and}\qquad \xi^{w}(Z)=2-\ln\alpha-\psi(2).$$

Therefore,

$$\Im^{w}(Z\|Z)=-4(1-\ln\alpha)-\psi(2)=-4H(Z)-\psi(2).$$
Suppose $\alpha\in(0,100)$. $H^{w}(Z)$, $\xi^{w}(Z)$ and $\Im^{w}(Z\|Z)$ with weight function $w(z)=z$ are evaluated in Table 1.

$\alpha$    $\hat H^{w}(Z)$    $\hat\xi^{w}(Z)$    $\hat\Im^{w}(Z\|Z)$
0.25    13.5452    2.9635    −9.9680
0.5    5.3863    2.2704    −7.1954
1    2    1.5772    −4.4228
5    0.0781    −0.0322    2.0150
10    −0.0303    −0.7254    4.7876

Table 1

Measures of weighted entropies of the exponential distribution.

Now, a real data set is used to investigate the performance of the suggested estimators.

Data Set: The following data set, taken from Smith and Naylor [22], represents the strength of 1.5 cm glass fibres measured at the National Physical Laboratory, England.

Data Set: 0.55, 0.93, 1.25, 1.36, 1.49, 1.52, 1.58, 1.61, 1.64, 1.68, 1.73, 1.81, 2.00, 0.74, 1.04, 1.27, 1.39, 1.49, 1.53, 1.59, 1.61, 1.66, 1.68, 1.76, 1.82, 2.01, 0.77, 1.11, 1.28, 1.42, 1.50, 1.54, 1.60, 1.62, 1.66, 1.69, 1.76, 1.84, 2.24, 0.81, 1.13, 1.29, 1.48, 1.50, 1.55, 1.61, 1.62, 1.66, 1.70, 1.77, 1.84, 0.84, 1.24, 1.30, 1.48, 1.51, 1.55, 1.61, 1.63, 1.67, 1.70, 1.78, 1.89

Shanker et al. [23] showed that the exponential density function ($\mathrm{Exp}(0.663647)$) provides a good fit for these data. We compute the exact value of the weighted entropy measure for the fitted model and compare it with $V\xi^{w}_{\delta,n}(Z)$, $SE\xi^{w}_{\delta,n}(Z)$, $WS\xi^{w}_{\delta,n}(Z)$ and $MS\xi^{w}_{\delta,n}(Z)$; the results are shown in Table 2.
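The exact value in Table 2 follows from Example 5.1: for the fitted model, $\xi^{w}(Z)=2-\ln\alpha-\psi(2)$ with $\alpha=0.663647$. A one-line check (assuming SciPy):

```python
import numpy as np
from scipy.special import digamma

alpha = 0.663647                        # rate fitted by Shanker et al. [23]
print(2 - np.log(alpha) - digamma(2))   # ~1.9872, the exact column of Table 2
```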

$\delta$    $\xi^{w}_{\delta,n}(Z)$ (exact)    $V\xi^{w}_{\delta,n}(Z)$    $SE\xi^{w}_{\delta,n}(Z)$    $WS\xi^{w}_{\delta,n}(Z)$    $MS\xi^{w}_{\delta,n}(Z)$
1    1.9872    1.210967749    1.244680937583    1.232972422    1.237538804
2    1.9872    1.200196229    1.231653171583    1.943017269    1.298445816
3    1.9872    1.245455104    1.462350850583    2.308545839    1.323498873
4    1.9872    1.279410563    1.650373401583    2.537544185    1.337842382
5    1.9872    1.282220574    1.820823663583    2.721711858    1.415075846

Table 2

Weighted entropy measure for the exponential distribution ($\alpha=0.663647$).

REFERENCES

1.N. Ebrahimi and F. Pellerey, J. Appl. Probab., Vol. 32, No. 1, 1995, pp. 202-211.
2.G. Asha and C.J. Rejeesh, Metron., Vol. 73, No. 1, 2015, pp. 119-134.
3.A.I. Khinchin, Mathematical Foundations of Information Theory, Dover Publications, New York, 1957.
4.A. Di Crescenzo and M. Longobardi, Sci. Math. Jpn., Vol. 64, No. 2, 2006, pp. 679-690.
5.A. Di Crescenzo and M. Longobardi, J. Appl. Probab., Vol. 39, No. 2, 2002, pp. 434-440.
6.S. Kullback and R.A. Leibler, Ann. Math. Stat., Vol. 22, No. 1, 1951, pp. 79-86.
7.N. Ebrahimi and S.N.U.A. Kirmani, Ann. Inst. Stat. Math., Vol. 48, No. 2, 1996, pp. 257-265.
8.S. Das, Commun. Stat. Theory Methods., Vol. 46, No. 12, 2017, pp. 5707-5727.
9.O. Vasicek, J. R. Stat. Soc. Ser. B., Vol. 38, No. 1, 1976, pp. 54-59.
10.I.S. Gradshteyn and I.M. Ryzhik, Table of Integrals, Series and Products, 5th ed., Academic Press, New York, 1994.
11.I. Bairamov, M. Ahsanullah, and I. Akhundov, J. Stat. Theory Appl., Vol. 1, No. 2, 2002, pp. 119-132.
12.M. Asadi and I. Bayramoglu, IEEE. Trans. Reliab., Vol. 55, No. 2, 2006, pp. 314-318.
13.M. Tavangar and M. Asadi, Metrika., Vol. 72, No. 1, 2010, pp. 59-73.
14.M. Asadi and I. Bayramoglu, Commun. Stat. Theory Methods., Vol. 34, No. 2, 2005, pp. 475-485.
15.M. Asadi, J. Stat. Plan. Infer., Vol. 136, No. 4, 2006, pp. 1197-1206.
16.R.C. Gupta, H.C. Taneja, and R. Thapliyal, J. Stat. Theory Appl., Vol. 13, No. 1, 2014, pp. 27-37.
17.M. Kijima, J. Appl. Probab., Vol. 26, No. 1, 1989, pp. 89-102.
18.B. Van Es, Scand. J. Stat., Vol. 19, No. 1, 1992, pp. 61-72.
19.N. Ebrahimi, K. Pflughoeft, and E. Soofi, Stat. Probab. Lett., Vol. 20, No. 3, 1994, pp. 225-234.
20.A.I. Al-Omari, J. Comput. Appl. Math., Vol. 261, No. 1, 2014, pp. 95-102.
21.Ibrahimi et al., 1994.
22.R.L. Smith and J.C. Naylor, J. R. Stat. Soc. Ser. C (Appl. Stat.), Vol. 36, No. 3, 1987, pp. 358-369.
23.R. Shanker, H. Fesshaye, and S. Selvaraj, Biom. Biostat. Int. J., Vol. 2, No. 5, 2015, pp. 1-9.