# Journal of Statistical Theory and Applications

Volume 18, Issue 1, March 2019, Pages 65 - 78

# Statistical Inference for Topp–Leone-generated Family of Distributions Based on Records

Authors
Department of Statistics and Operations Research, Aligarh Muslim University, Aligarh, Uttar Pradesh, India
Corresponding Author
Received 13 November 2017, Accepted 6 August 2018, Available Online 22 April 2019.
DOI
10.2991/jsta.d.190306.008
Keywords
Topp–Leone-generated family of distributions; ML estimators; Bayesian inference; symmetric loss function; asymmetric loss function; Bayes estimators; reliability function
Abstract

In this paper, we consider a general family of distributions generated by the Topp–Leone distribution (known as the TL family of distributions) proposed by Rezaei et al. [1]. We consider the problem of estimating the shape parameter, the scale parameter, and the reliability function based on record data from the TL family of distributions. We derive the maximum likelihood estimators (MLEs) of the shape parameter, the scale parameter, and the reliability function. We also obtain the uniformly minimum-variance unbiased estimator (UMVUE) of the reliability function when the scale parameter is known. A Bayesian study is carried out under symmetric and asymmetric loss functions in order to find the Bayes estimators of the unknown parameters and the reliability function. Further, we predict future record values using the Bayesian approach. A numerical comparison of the various estimators is also reported.

Open Access

## 1. INTRODUCTION

The Topp–Leone (TL) distribution was first introduced by Topp and Leone [2]. Later, Nadarajah and Kotz [3] discussed this distribution elaborately, obtaining explicit algebraic expressions for the hazard rate function, the nth moment, and so on. The density and distribution function (df) of the TL distribution are given by

$$f(x)=2\alpha(1-x)\left[x(2-x)\right]^{\alpha-1},\quad \alpha>0,\ 0<x<1,$$

$$F(x)=\left[x(2-x)\right]^{\alpha},\quad \alpha>0,\ 0<x<1.$$

Since its emergence, many authors have studied different properties of the TL distribution. We mention reliability measures and stochastic orderings (Ghitany et al. [4]); distributions of sums, products, and ratios (Zhou et al. [5]); behavior of kurtosis (Kotz and Seier [6]); record values (Zghoul [7]); and moments of order statistics (Genc [8]).

Although probability distributions are very useful in practical problems, in some situations the available distributions do not fit the problem at hand appropriately. It then becomes necessary either to define a new distribution or to modify an existing one so that it becomes useful for the practical problem under study. This need has driven the recent interest in generated families of distributions, and several authors have proposed various such families over the last few years. The TL distribution is useful and widely applicable, but because it has only one parameter and its support is restricted to (0,1), it is not flexible and cannot be used for lifetime modeling, so a generalization of the TL distribution is needed. One such generalization is discussed by Al-Shomrani et al. [9], who took G(x) as the baseline df in the TL distribution and obtained moments and the hazard rate of the new TL family of distributions. Rezaei et al. [1] also generalized the TL distribution by taking a general df G as the baseline, calling the result the TL-generated (TLG) family of distributions. They explained some special cases of this family and derived expressions for the maximum likelihood estimators (MLEs) of the unknown parameters.

In this paper, we consider the TLG family of distributions proposed by Rezaei et al. [1]. For this generated family of distributions, we take the baseline distribution to be $G(x/\theta)$, where θ denotes an unknown scale parameter. Several well-known distributions can serve as the baseline, for example, the exponential distribution with df $G(x/\theta)=1-e^{-x/\theta}$, $\theta>0$, $x>0$, the Rayleigh distribution with df $G(x/\theta)=1-e^{-x^2/(2\theta^2)}$, $\theta>0$, $x>0$, and so on.

The density and df of the TLG family of distributions are given by

$$f(x;\alpha,\theta)=\frac{2\alpha}{\theta}\,g(x/\theta)\left[1-G(x/\theta)\right]\left\{G(x/\theta)\left[2-G(x/\theta)\right]\right\}^{\alpha-1},\quad \alpha>0,\ \theta>0, \tag{1}$$

$$F(x;\alpha,\theta)=\left\{G(x/\theta)\left[2-G(x/\theta)\right]\right\}^{\alpha},\quad \alpha>0,\ \theta>0,$$

where $g$ denotes the density corresponding to the baseline df $G$.

Nowadays, many researchers are interested in the study of record data (extreme values) because of its applications in various fields: in sports, the longest winning streak of a team, the highest score of a player, or the fewest runs conceded by a bowler in an over; in marketing, the lowest stock market figure or the minimum price of a certain product; in medical science, the largest number of people affected by a disease at a particular place; and so on. In all these fields of research, record data are widely used.

Chandler [10] introduced the idea of record values and studied some of their basic properties. Since then, many authors have worked in this field and made valuable contributions. For an excellent understanding of records, one may refer to the books by Ahsanullah [11], Ahsanullah [12], and Arnold et al. [13]. For applications of record values in various disciplines, one may refer to Minimol and Thomas [14], Ahsanullah [15], Bdair and Raqab [16], MirMostafaee et al. [17], Ahsanullah and Nevzorov [18], Arshad and Jamal [19], Arshad and Baklizi [20], Anwar [21], and Arshad and Jamal [22]. We now give the mathematical definition of records and their distributions.

### Definition 1.1

Let $\{X_i : i\ge 1\}$ be a sequence of independent and identically distributed (iid) random variables with an absolutely continuous df $F(x)$ and probability density function (pdf) $f(x)$. An observation $X_j$ is called a lower record if its value is smaller than all preceding observations, that is, $X_j$ is a lower record if $X_j<X_i$ for every $i<j$. Let $R_1,R_2,\ldots,R_n$ be the first $n$ lower records and let $r_1,r_2,\ldots,r_n$ denote the observed values of $R_1,R_2,\ldots,R_n$, respectively. The density of the $n$th lower record is given by

$$f_{R_n}(r_n)=\frac{\left[-\ln F(r_n)\right]^{n-1}}{(n-1)!}\,f(r_n),\quad -\infty<r_n<\infty.$$

The joint density of the $p$th and $q$th lower records $(p<q)$ is given by

$$f_{R_p,R_q}(r_p,r_q)=\frac{\left[-\ln F(r_p)\right]^{p-1}}{(p-1)!}\,\frac{\left[\ln F(r_p)-\ln F(r_q)\right]^{q-p-1}}{(q-p-1)!}\,\frac{f(r_p)\,f(r_q)}{F(r_p)},\quad -\infty<r_q<r_p<\infty.$$

The joint density of $\underline{R}=(R_1,R_2,\ldots,R_n)$ is given by

$$f_{\underline{R}}(r_1,r_2,\ldots,r_n)=\left[\prod_{i=1}^{n-1}\frac{f(r_i)}{F(r_i)}\right]f(r_n),\quad -\infty<r_n<\cdots<r_2<r_1<\infty. \tag{2}$$
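As a concrete illustration of these definitions, the lower-record subsequence can be extracted from an iid sample in a single pass. The sketch below (our own illustration, not from the paper) samples from the TL-Exp special case of Example 2.1 by inverting the df $F(x)=(1-e^{-2x/\theta})^{\alpha}$, which follows from the TLG df with an exponential baseline since $G(x/\theta)[2-G(x/\theta)]=1-e^{-2x/\theta}$; the function names are our own.

```python
import math
import random

def rtlexp(alpha, theta, rng):
    # Inverse-CDF draw from the TL-Exp model: F(x) = (1 - exp(-2x/theta))^alpha,
    # so F^{-1}(u) = -(theta/2) * ln(1 - u^{1/alpha}).
    u = rng.random()
    return -(theta / 2.0) * math.log(1.0 - u ** (1.0 / alpha))

def lower_records(xs):
    # Keep each observation that falls strictly below every earlier one.
    recs = []
    for x in xs:
        if not recs or x < recs[-1]:
            recs.append(x)
    return recs

rng = random.Random(2019)
sample = [rtlexp(5.0, 3.0, rng) for _ in range(5000)]
recs = lower_records(sample)
```

The first observation is always a record, and the last extracted record equals the running minimum of the whole sample.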

The remainder of the paper is organized as follows. In Section 2, we derive the expressions for the MLEs of the unknown parameters; an example of finding the MLEs is also provided. In Section 3, the uniformly minimum-variance unbiased estimator (UMVUE) of the reliability function is derived when the scale parameter is known. In Section 4, a Bayesian study is carried out to obtain the Bayes estimators of the scale parameter, the shape parameter, and the reliability function under symmetric (squared error) and asymmetric (LINEX and entropy) loss functions. In Section 5, we provide a Bayesian prediction interval for future records. Finally, in Section 6, a numerical study is provided to illustrate the results.

## 2. MAXIMUM LIKELIHOOD ESTIMATION

The likelihood function based on the lower records observed from the TLG family of distributions is given by

$$L(\alpha,\theta;\underline{r})=\left(\frac{2\alpha}{\theta}\right)^{n}\left[\prod_{i=1}^{n}\frac{g(r_i/\theta)\left[1-G(r_i/\theta)\right]}{G(r_i/\theta)\left[2-G(r_i/\theta)\right]}\right]\left\{G(r_n/\theta)\left[2-G(r_n/\theta)\right]\right\}^{\alpha}.$$

Taking logarithms of both sides, we get

$$\ln L(\alpha,\theta;\underline{r})=n\ln(2\alpha)-n\ln\theta+\sum_{i=1}^{n}\ln\frac{g(r_i/\theta)\left[1-G(r_i/\theta)\right]}{G(r_i/\theta)\left[2-G(r_i/\theta)\right]}+\alpha\ln\left\{G(r_n/\theta)\left[2-G(r_n/\theta)\right]\right\}. \tag{3}$$

Differentiating Eq. (3) with respect to α, we get

$$\frac{\partial\ln L(\alpha,\theta;\underline{r})}{\partial\alpha}=\frac{n}{\alpha}+\ln\left\{G(r_n/\theta)\left[2-G(r_n/\theta)\right]\right\}.$$

To find the MLE of α, we equate the above derivative to 0, which gives

$$\alpha\,\ln\left\{G(r_n/\theta)\left[2-G(r_n/\theta)\right]\right\}+n=0. \tag{4}$$

Similarly, differentiating Eq. (3) with respect to θ and equating to 0, we get

$$-\frac{n}{\theta}-\frac{2\alpha r_n}{\theta^{2}}\,\frac{g(r_n/\theta)\left[1-G(r_n/\theta)\right]}{G(r_n/\theta)\left[2-G(r_n/\theta)\right]}-\sum_{i=1}^{n}\frac{r_i}{\theta^{2}}\,\frac{g'(r_i/\theta)\left[1-G(r_i/\theta)\right]-g^{2}(r_i/\theta)}{g(r_i/\theta)\left[1-G(r_i/\theta)\right]}+\sum_{i=1}^{n}\frac{2r_i}{\theta^{2}}\,\frac{g(r_i/\theta)\left[1-G(r_i/\theta)\right]}{G(r_i/\theta)\left[2-G(r_i/\theta)\right]}=0. \tag{5}$$

The MLE $(\hat{\alpha},\hat{\theta})$ of $(\alpha,\theta)$ is a solution of Eqs. (4) and (5). Because of the nonlinear nature of these equations, explicit closed-form solutions are not available, so numerical techniques are used to obtain the MLEs of both parameters and of the reliability function based on lower records from the TLG family of distributions. By the invariance property of MLEs, the MLE of the reliability function $R(t)$ is obtained by replacing α and θ by their MLEs $\hat{\alpha}$ and $\hat{\theta}$ from Eqs. (4) and (5), that is,

$$\hat{R}(t)=1-\left\{G(t/\hat{\theta})\left[2-G(t/\hat{\theta})\right]\right\}^{\hat{\alpha}}.$$

### Example 2.1.

Let the TLG family of distributions have the exponential baseline distribution with df

$$G(x/\theta)=1-e^{-x/\theta},\quad x>0,\ \theta>0.$$

Then X has the Topp–Leone exponential (TL-Exp) distribution. Noting that $G(x/\theta)\left[2-G(x/\theta)\right]=1-e^{-2x/\theta}$, Eqs. (4) and (5) give

$$\alpha\,\ln\left(1-e^{-2r_n/\theta}\right)+n=0, \tag{6}$$

$$-\frac{n}{\theta}+\frac{2nr_n}{\theta^{2}}\,\frac{e^{-2r_n/\theta}}{\left(1-e^{-2r_n/\theta}\right)\ln\left(1-e^{-2r_n/\theta}\right)}+\frac{2}{\theta^{2}}\sum_{i=1}^{n}r_i+\frac{2}{\theta^{2}}\sum_{i=1}^{n}\frac{r_i\,e^{-2r_i/\theta}}{1-e^{-2r_i/\theta}}=0, \tag{7}$$

where Eq. (7) is the profile equation in θ obtained after substituting the value of α from Eq. (6) into Eq. (5).

For the MLE of θ, Eq. (7) has to be solved numerically; the MLE of α is then obtained from Eq. (6) by substituting the value $\hat{\theta}$ obtained from Eq. (7). The numerical computation of the MLEs of α, θ, and $R(t)$ is illustrated in Section 6 (see Example 6.1).
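A minimal numerical sketch of this procedure, assuming the reconstructed Eqs. (6) and (7) above and the record data reported later in Example 6.1; the bracketing interval and the bisection routine are our own choices, and the resulting estimates need not reproduce the values reported in Section 6 exactly.

```python
import math

# Lower records reported in Example 6.1 of Section 6.
r = [7.0752, 4.6823, 4.0686, 3.9577, 3.3374, 1.6600, 1.5436, 1.1236, 0.6410]
n, rn = len(r), r[-1]

def profile_eq(theta):
    # Left-hand side of the profile equation (7): Eq. (6) gives
    # alpha = -n / ln(1 - exp(-2 rn / theta)), substituted into Eq. (5).
    q = 1.0 - math.exp(-2.0 * rn / theta)
    total = -n / theta
    total += (2.0 * n * rn / theta**2) * math.exp(-2.0 * rn / theta) / (q * math.log(q))
    for ri in r:
        qi = 1.0 - math.exp(-2.0 * ri / theta)
        # Combines (2/theta^2) * ri and (2/theta^2) * ri * e^{-2ri/theta} / qi.
        total += (2.0 * ri / theta**2) * (1.0 + math.exp(-2.0 * ri / theta) / qi)
    return total

def bisect(f, lo, hi, tol=1e-10):
    # Plain bisection; assumes f(lo) and f(hi) have opposite signs.
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

theta_hat = bisect(profile_eq, 0.5, 50.0)
alpha_hat = -n / math.log(1.0 - math.exp(-2.0 * rn / theta_hat))
```

At the returned root, the profile equation is satisfied to numerical accuracy, and Eq. (6) then yields a positive $\hat{\alpha}$.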

## 3. UMVUE OF RELIABILITY FUNCTION

In this section, we derive the UMVUE of $R(t)$ when the scale parameter θ is known (without loss of generality, we take θ = 1). For this, we need the following lemma; its proof is straightforward and is omitted, and it can also be obtained from Lemma 3.1 of Khan and Arshad [23].

### Lemma 3.1

Let $R_1,R_2,\ldots,R_n$ be the first $n$ lower records having the joint pdf given in Eq. (2). Define $Z=G(R_n)\left[2-G(R_n)\right]$. Then, for $z\in(0,1)$, the conditional distribution of $R_1$ given $Z=z$ is

$$f_{R_1|Z}(r_1|z)=\begin{cases}\dfrac{2(n-1)}{-\ln z}\left[1-\dfrac{\ln\left\{G(r_1)\left[2-G(r_1)\right]\right\}}{\ln z}\right]^{n-2}\dfrac{g(r_1)\left[1-G(r_1)\right]}{G(r_1)\left[2-G(r_1)\right]}, & \text{if } G^{-1}\!\left(1-\sqrt{1-z}\right)<r_1<\infty,\\[2mm] 0, & \text{otherwise.}\end{cases}$$

Now we derive the UMVUE of $R(t)=1-F(t)$. Since $Z$ is a complete sufficient statistic for α, it follows from the Lehmann–Scheffé theorem that the UMVUE of $R(t)$ can be obtained as

$$\tilde{R}(t)=E\left[J(R_1,t)\,\middle|\,Z=z\right],$$
where
$$J(R_1,t)=\begin{cases}1, & \text{if } R_1>t,\\ 0, & \text{if } R_1\le t.\end{cases}$$

Using Lemma 3.1, we have

$$\tilde{R}(t)=\int_{t}^{\infty}f_{R_1|Z}(r_1|z)\,dr_1=\int_{\max\left\{t,\,G^{-1}\left(1-\sqrt{1-z}\right)\right\}}^{\infty}\frac{2(n-1)}{-\ln z}\left[1-\frac{\ln\left\{G(r_1)\left[2-G(r_1)\right]\right\}}{\ln z}\right]^{n-2}\frac{g(r_1)\left[1-G(r_1)\right]}{G(r_1)\left[2-G(r_1)\right]}\,dr_1=1-\left[\max\left\{0,\,1-\frac{\ln\left\{G(t)\left[2-G(t)\right]\right\}}{\ln z}\right\}\right]^{n-1}.$$

The UMVUE of $R(t)$ is therefore

$$\tilde{R}(t)=\begin{cases}1-\left[1-\dfrac{\ln\left\{G(t)\left[2-G(t)\right]\right\}}{\ln z}\right]^{n-1}, & \text{if } z<G(t)\left[2-G(t)\right],\\[2mm] 1, & \text{if } z\ge G(t)\left[2-G(t)\right].\end{cases}$$

**Example 2.1 (continued).** The UMVUE of $R(t)$ for the TL-Exp distribution (with θ = 1) is

$$\tilde{R}(t)=\begin{cases}1-\left[1-\dfrac{\ln\left(1-e^{-2t}\right)}{\ln\left(1-e^{-2r_n}\right)}\right]^{n-1}, & \text{if } t>r_n,\\[2mm] 1, & \text{if } t\le r_n.\end{cases}$$
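The UMVUE above is straightforward to evaluate. The sketch below (with illustrative record values and θ = 1, not data from the paper) implements it and checks its basic properties: it equals 1 for $t\le r_n$, stays in $[0,1]$, and decreases in $t$.

```python
import math

def umvue_reliability(t, records):
    # UMVUE of R(t) for the TL-Exp model with theta = 1 (Section 3):
    # R~(t) = 1 - [1 - ln(1 - e^{-2t}) / ln(1 - e^{-2 r_n})]^{n-1}  for t > r_n,
    # and R~(t) = 1 for t <= r_n.
    n, rn = len(records), records[-1]
    if t <= rn:
        return 1.0
    w = 1.0 - math.log(1.0 - math.exp(-2.0 * t)) / math.log(1.0 - math.exp(-2.0 * rn))
    return 1.0 - w ** (n - 1)

recs = [2.91, 1.74, 0.95, 0.63, 0.52]   # illustrative decreasing lower records
vals = [umvue_reliability(t, recs) for t in (0.3, 0.52, 0.8, 1.5, 3.0)]
```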

## 4. BAYESIAN ESTIMATION

In this section, we consider the estimation problem from the Bayesian viewpoint. We consider one symmetric and two asymmetric loss functions, and under each we obtain the Bayes estimators of both parameters and of the reliability function. The squared error loss function is taken as the symmetric loss function; it gives equal weight to overestimation and underestimation. As asymmetric loss functions we use the linear exponential (LINEX) loss function, proposed by Varian [24] (see also Zellner [25]), and the entropy loss function, proposed by James and Stein [26].

In the TLG family of distributions, it is not possible to find a mathematically tractable continuous joint prior distribution for the two unknown parameters α and θ. To choose a joint prior distribution for (α, θ) that incorporates uncertainty about both unknown parameters, we adopt the method proposed by Soland [27]. This method has also been used by several researchers (see Asgharzadeh and Fallah [28]).

Assume that the scale parameter θ is restricted to a finite number of values $\theta_1,\theta_2,\ldots,\theta_k$ with prior probabilities $p_1,p_2,\ldots,p_k$, respectively, that is, the prior distribution of θ is given by

$$\pi(\theta_j)=P(\theta=\theta_j)=p_j,\quad j=1,2,\ldots,k.$$

Further, we assume that the conditional prior distribution of α given $\theta=\theta_j$ is a gamma distribution with parameters $a_j$ and $b_j$, that is,

$$\pi(\alpha|\theta_j)=\frac{b_j^{a_j}\,\alpha^{a_j-1}\,e^{-\alpha b_j}}{\Gamma(a_j)},\quad \alpha>0,\ a_j>0,\ b_j>0. \tag{10}$$

The joint density of the records $\underline{R}=(R_1,R_2,\ldots,R_n)$ is given by

$$f_{\underline{R}}(\underline{r}\,|\,\alpha,\theta)=\left(\frac{2\alpha}{\theta}\right)^{n}\left[\prod_{i=1}^{n}\frac{g(r_i/\theta)\left[1-G(r_i/\theta)\right]}{G(r_i/\theta)\left[2-G(r_i/\theta)\right]}\right]\left\{G(r_n/\theta)\left[2-G(r_n/\theta)\right]\right\}^{\alpha},\quad \underline{r}=(r_1,r_2,\ldots,r_n). \tag{11}$$

Using Eqs. (10) and (11), we get the conditional posterior density of α given $\theta=\theta_j$ as

$$\pi(\alpha|\theta_j;\underline{r})=\frac{\pi(\alpha|\theta_j)\,f_{\underline{R}}(\underline{r}|\alpha,\theta_j)}{\int_0^{\infty}\pi(\alpha|\theta_j)\,f_{\underline{R}}(\underline{r}|\alpha,\theta_j)\,d\alpha}=\frac{\alpha^{n+a_j-1}\,e^{-\alpha I(r_n,\theta_j)}}{\int_0^{\infty}\alpha^{n+a_j-1}\,e^{-\alpha I(r_n,\theta_j)}\,d\alpha}=\frac{\left[I(r_n,\theta_j)\right]^{n+a_j}}{\Gamma(n+a_j)}\,\alpha^{n+a_j-1}\,e^{-\alpha I(r_n,\theta_j)},\quad \alpha>0, \tag{12}$$
where
$$I(r_n,\theta_j)=b_j-\ln\left\{G(r_n/\theta_j)\left[2-G(r_n/\theta_j)\right]\right\}.$$

The joint prior distribution is obtained by multiplying $\pi(\alpha|\theta_j)$ and $\pi(\theta_j)$, $j=1,2,\ldots,k$. The joint posterior distribution of $(\alpha,\theta)$ is then given by

$$\pi(\alpha,\theta_j|\underline{r})=\frac{f_{\underline{R}}(\underline{r}|\alpha,\theta_j)\,\pi(\alpha|\theta_j)\,\pi(\theta_j)}{\displaystyle\int_0^{\infty}\sum_{j=1}^{k}f_{\underline{R}}(\underline{r}|\alpha,\theta_j)\,\pi(\alpha|\theta_j)\,\pi(\theta_j)\,d\alpha}. \tag{13}$$

We first evaluate the denominator integral of the above equation:

$$I=\int_0^{\infty}\sum_{j=1}^{k}f_{\underline{R}}(\underline{r}|\alpha,\theta_j)\,\pi(\alpha|\theta_j)\,\pi(\theta_j)\,d\alpha=\sum_{j=1}^{k}\frac{p_j\,b_j^{a_j}}{\Gamma(a_j)}\left(\frac{2}{\theta_j}\right)^{n}m_j\int_0^{\infty}\alpha^{n+a_j-1}\,e^{-\alpha I(r_n,\theta_j)}\,d\alpha=\sum_{j=1}^{k}\frac{p_j\,b_j^{a_j}}{\Gamma(a_j)}\left(\frac{2}{\theta_j}\right)^{n}m_j\,\frac{\Gamma(n+a_j)}{\left[I(r_n,\theta_j)\right]^{n+a_j}}, \tag{14}$$

where $m_j=\prod_{i=1}^{n}\dfrac{g(r_i/\theta_j)\left[1-G(r_i/\theta_j)\right]}{G(r_i/\theta_j)\left[2-G(r_i/\theta_j)\right]}$, $j=1,2,\ldots,k$.

Using Eqs. (13) and (14), the joint posterior density is

$$\pi(\alpha,\theta_j|\underline{r})=\frac{p_j\,m_j\,b_j^{a_j}}{\theta_j^{n}\,\Gamma(a_j)\,\mathcal{G}}\,\alpha^{n+a_j-1}\,e^{-\alpha I(r_n,\theta_j)},\quad \alpha>0,\ j=1,2,\ldots,k, \tag{15}$$
where
$$\mathcal{G}=\sum_{j=1}^{k}\frac{p_j\,m_j\,b_j^{a_j}}{\theta_j^{n}\,\Gamma(a_j)}\,\frac{\Gamma(n+a_j)}{\left[I(r_n,\theta_j)\right]^{n+a_j}}\quad\text{and}\quad m_j=\prod_{i=1}^{n}\frac{g(r_i/\theta_j)\left[1-G(r_i/\theta_j)\right]}{G(r_i/\theta_j)\left[2-G(r_i/\theta_j)\right]},\ j=1,2,\ldots,k.$$

The marginal posterior probability of $\theta_j$ is

$$P_j=\int_0^{\infty}\pi(\alpha,\theta_j|\underline{r})\,d\alpha=\frac{p_j\,m_j\,b_j^{a_j}}{\theta_j^{n}\,\Gamma(a_j)\,\mathcal{G}}\,\frac{\Gamma(n+a_j)}{\left[I(r_n,\theta_j)\right]^{n+a_j}},\quad j=1,2,\ldots,k.$$

Now we will derive the Bayes estimators of unknown quantities under various loss functions.

## 4.1. Squared Error Loss Function

The squared error loss function is defined as

$$L(\delta,\lambda)=(\delta-\lambda)^2,\quad \delta\in D,\ \lambda\in\Theta,$$
where $D$ is the decision space and $\Theta$ is the parameter space. The Bayes estimator under squared error loss is the posterior mean, so the Bayes estimator of α is
$$\alpha_{BS}=\int_0^{\infty}\alpha\sum_{j=1}^{k}P_j\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha=\sum_{j=1}^{k}\frac{(n+a_j)\,P_j}{I(r_n,\theta_j)}.$$

Similarly, the Bayes estimator of θ is given by

$$\theta_{BS}=\int_0^{\infty}\sum_{j=1}^{k}\theta_j\,P_j\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha=\sum_{j=1}^{k}P_j\,\theta_j.$$

The Bayes estimator of the reliability function $R(t)$ is

$$R(t)_{BS}=\int_0^{\infty}\sum_{j=1}^{k}\left[1-F(t;\alpha,\theta_j)\right]P_j\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha=\sum_{j=1}^{k}P_j\left[1-\left(1-\frac{\ln\left\{G(t/\theta_j)\left[2-G(t/\theta_j)\right]\right\}}{I(r_n,\theta_j)}\right)^{-(n+a_j)}\right].$$

## 4.2. Entropy Loss Function

The entropy loss function is given by

$$L(\delta,\lambda)=\frac{\delta}{\lambda}-\ln\frac{\delta}{\lambda}-1,\quad \delta\in D,\ \lambda\in\Theta.$$

The Bayes estimator under this loss function is

$$\delta_{BE}=\left[E\left(\lambda^{-1}\right)\right]^{-1},$$

where the expectation is taken with respect to the posterior distribution.

So the Bayes estimator of α is $\alpha_{BE}=\left[E\left(\alpha^{-1}\right)\right]^{-1}$, where

$$E\left(\alpha^{-1}\right)=\int_0^{\infty}\frac{1}{\alpha}\sum_{j=1}^{k}P_j\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha=\sum_{j=1}^{k}P_j\,\frac{\left[I(r_n,\theta_j)\right]^{n+a_j}}{\Gamma(n+a_j)}\,\frac{\Gamma(n+a_j-1)}{\left[I(r_n,\theta_j)\right]^{n+a_j-1}}=\sum_{j=1}^{k}\frac{P_j\,I(r_n,\theta_j)}{n+a_j-1}.$$

Hence

$$\alpha_{BE}=\left[\sum_{j=1}^{k}\frac{P_j\,I(r_n,\theta_j)}{n+a_j-1}\right]^{-1}.$$

Similarly, the Bayes estimator of θ is

$$\theta_{BE}=\left[\int_0^{\infty}\sum_{j=1}^{k}\frac{P_j}{\theta_j}\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha\right]^{-1}=\left[\sum_{j=1}^{k}\frac{P_j}{\theta_j}\right]^{-1}.$$

The Bayes estimator of the reliability function is given by

$$R(t)_{BE}=\left[\sum_{j=1}^{k}P_j\int_0^{\infty}\frac{1}{1-F(t;\alpha,\theta_j)}\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha\right]^{-1}.$$

Using the geometric series expansion $\left[1-F\right]^{-1}=\sum_{s=0}^{\infty}F^{s}$ in the above expression, we have

$$R(t)_{BE}=\left[\sum_{j=1}^{k}P_j\int_0^{\infty}\sum_{s=0}^{\infty}\left[F(t;\alpha,\theta_j)\right]^{s}\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha\right]^{-1}=\left[\sum_{j=1}^{k}\sum_{s=0}^{\infty}P_j\left(1-\frac{s\,\ln\left\{G(t/\theta_j)\left[2-G(t/\theta_j)\right]\right\}}{I(r_n,\theta_j)}\right)^{-(n+a_j)}\right]^{-1}.$$

## 4.3. LINEX Loss

The LINEX loss function is

$$L(\delta,\lambda)=e^{c(\delta-\lambda)}-c(\delta-\lambda)-1,\quad c\ne 0,\ \delta\in D,\ \lambda\in\Theta,$$
where $c\ne 0$ is the parameter of the loss function. The Bayes estimator under this loss function is
$$\delta_{BL}=-\frac{1}{c}\,\ln E\left(e^{-c\lambda}\right),\quad c\ne 0.$$

The Bayes estimator of α is

$$\hat{\alpha}_{BL}=-\frac{1}{c}\,\ln\left[\sum_{j=1}^{k}P_j\int_0^{\infty}e^{-c\alpha}\,\frac{\left[I(r_n,\theta_j)\right]^{n+a_j}}{\Gamma(n+a_j)}\,\alpha^{n+a_j-1}\,e^{-\alpha I(r_n,\theta_j)}\,d\alpha\right]=-\frac{1}{c}\,\ln\left[\sum_{j=1}^{k}P_j\left(1+\frac{c}{I(r_n,\theta_j)}\right)^{-(n+a_j)}\right].$$

The Bayes estimator of θ is

$$\theta_{BL}=-\frac{1}{c}\,\ln E\left(e^{-c\theta}\right)=-\frac{1}{c}\,\ln\left[\sum_{j=1}^{k}P_j\,e^{-c\theta_j}\right].$$

The Bayes estimator of the reliability function is given by

$$R(t)_{BL}=-\frac{1}{c}\,\ln E\left(e^{-cR(t)}\right).$$

To evaluate this, we use the exponential series expansion

$$e^{-cR(t)}=e^{-c\left[1-F(t;\alpha,\theta_j)\right]}=e^{-c}\,e^{cF(t;\alpha,\theta_j)}=e^{-c}\sum_{s=0}^{\infty}\frac{c^{s}\left[F(t;\alpha,\theta_j)\right]^{s}}{s!}.$$

Hence

$$R(t)_{BL}=-\frac{1}{c}\,\ln\left[e^{-c}\sum_{j=1}^{k}\sum_{s=0}^{\infty}\frac{c^{s}P_j}{s!}\left(1-\frac{s\,\ln\left\{G(t/\theta_j)\left[2-G(t/\theta_j)\right]\right\}}{I(r_n,\theta_j)}\right)^{-(n+a_j)}\right].$$
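The sketch below assembles the posterior probabilities $P_j$ and the three Bayes estimators of α for the TL-Exp baseline, for which $m_j=\prod_i e^{-2r_i/\theta_j}/(1-e^{-2r_i/\theta_j})$. The θ grid, prior probabilities, and hyperparameters $a_j$, $b_j$ are illustrative placeholders (the paper elicits them as described in Section 6); weights are accumulated on the log scale via `lgamma` to avoid overflow in $\Gamma(n+a_j)$.

```python
import math

# Records as in Example 6.1; all hyperparameters below are illustrative only.
r = [7.0752, 4.6823, 4.0686, 3.9577, 3.3374, 1.6600, 1.5436, 1.1236, 0.6410]
n, rn = len(r), r[-1]
thetas = [2.0 + 0.1 * j for j in range(10)]
p = [0.1] * 10                      # prior probabilities p_j
a = [5.0] * 10                      # illustrative gamma hyperparameters a_j
b = [1.0] * 10                      # illustrative gamma hyperparameters b_j

def I(x, th, bj):
    # I(x, theta_j) = b_j - ln{G(x/th)[2 - G(x/th)]} = b_j - ln(1 - e^{-2x/th})
    return bj - math.log(1.0 - math.exp(-2.0 * x / th))

def log_weight(j):
    # log of the unnormalized posterior weight of theta_j:
    # p_j m_j b_j^{a_j} Gamma(n+a_j) / (theta_j^n Gamma(a_j) I(r_n,theta_j)^{n+a_j})
    th = thetas[j]
    log_m = sum(-2.0 * ri / th - math.log(1.0 - math.exp(-2.0 * ri / th)) for ri in r)
    return (math.log(p[j]) + log_m + a[j] * math.log(b[j]) - n * math.log(th)
            + math.lgamma(n + a[j]) - math.lgamma(a[j])
            - (n + a[j]) * math.log(I(rn, th, b[j])))

lw = [log_weight(j) for j in range(10)]
mx = max(lw)
w = [math.exp(v - mx) for v in lw]
P = [wj / sum(w) for wj in w]        # posterior probabilities P_j

Ij = [I(rn, thetas[j], b[j]) for j in range(10)]
alpha_bs = sum(P[j] * (n + a[j]) / Ij[j] for j in range(10))              # squared error
alpha_be = 1.0 / sum(P[j] * Ij[j] / (n + a[j] - 1.0) for j in range(10))  # entropy
c = 0.5
alpha_bl = -(1.0 / c) * math.log(
    sum(P[j] * (1.0 + c / Ij[j]) ** (-(n + a[j])) for j in range(10)))    # LINEX
```

By Jensen's inequality, the entropy and (for c > 0) LINEX estimators lie below the posterior mean, which is one way the asymmetric losses shade the estimate.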

## 5. PREDICTION INTERVAL

In this section, we predict the future lower record $R_s$, given that the records $R_1,R_2,\ldots,R_n$ with $n<s$ have already been observed. For this problem, we use the Bayesian procedure and the Markovian property of record statistics. The conditional distribution of $R_s$ given $R_n$ is obtained using the Markovian property (see Arnold et al. [13]):

$$f_{R_s|R_n}(r_s|r_n;\alpha,\theta)=\frac{\left[H(r_s)-H(r_n)\right]^{s-n-1}}{\Gamma(s-n)}\,\frac{f(r_s;\alpha,\theta)}{F(r_n;\alpha,\theta)},\quad -\infty<r_s<r_n<\infty,$$
where $H=-\ln F$. For the TLG family of distributions, with pdf given by Eq. (1), the function $f_{R_s|R_n}(r_s|r_n;\alpha,\theta)$ is given by
$$f_{R_s|R_n}(r_s|r_n;\alpha,\theta)=\frac{2\,\alpha^{s-n}}{\theta\,\Gamma(s-n)}\left[\ln\frac{G(r_n/\theta)\left[2-G(r_n/\theta)\right]}{G(r_s/\theta)\left[2-G(r_s/\theta)\right]}\right]^{s-n-1}\frac{g(r_s/\theta)\left[1-G(r_s/\theta)\right]}{G(r_s/\theta)\left[2-G(r_s/\theta)\right]}\left[\frac{G(r_s/\theta)\left[2-G(r_s/\theta)\right]}{G(r_n/\theta)\left[2-G(r_n/\theta)\right]}\right]^{\alpha},\quad -\infty<r_s<r_n<\infty.$$

The Bayes predictive density function of $R_s$ given $R_n=r_n$ is given by

$$f^{*}(r_s|r_n)=\sum_{j=1}^{k}P_j\int_0^{\infty}f_{R_s|R_n}(r_s|r_n;\alpha,\theta_j)\,\pi(\alpha|\theta_j;\underline{r})\,d\alpha. \tag{16}$$

Using Eq. (12) in Eq. (16), we get

$$f^{*}(r_s|r_n)=\sum_{j=1}^{k}\frac{2P_j}{\theta_j\,B(n+a_j,\,s-n)}\left[\ln\frac{G(r_n/\theta_j)\left[2-G(r_n/\theta_j)\right]}{G(r_s/\theta_j)\left[2-G(r_s/\theta_j)\right]}\right]^{s-n-1}\frac{g(r_s/\theta_j)\left[1-G(r_s/\theta_j)\right]}{G(r_s/\theta_j)\left[2-G(r_s/\theta_j)\right]}\,\frac{\left[I(r_n,\theta_j)\right]^{n+a_j}}{\left[I(r_s,\theta_j)\right]^{s+a_j}},\quad -\infty<r_s<r_n<\infty,$$

where $B(a,b)$ is the complete beta function. We now find the lower and upper $100(1-\alpha)\%$ prediction bounds for $R_s$. First, we find the predictive survival function $P(R_s\ge d\,|\,r_n)$ for some positive constant $d$:

$$P(R_s\ge d\,|\,r_n)=\int_{d}^{r_n}f^{*}(r_s|r_n)\,dr_s=\sum_{j=1}^{k}P_j\left[1-\frac{IB(n+a_j,\,s-n,\,\chi_j)}{B(n+a_j,\,s-n)}\right], \tag{17}$$

where $\chi_j=\dfrac{I(r_n,\theta_j)}{I(d,\theta_j)}$ and $IB(a,b,\chi)$ is the incomplete beta function defined by

$$IB(a,b,\chi)=\int_0^{\chi}u^{a-1}(1-u)^{b-1}\,du.$$

Let $L(r_n)$ and $U(r_n)$ be two constants such that

$$P\left(R_s>L(r_n)\,|\,r_n\right)=1-\frac{\alpha}{2}\quad\text{and}\quad P\left(R_s>U(r_n)\,|\,r_n\right)=\frac{\alpha}{2}.$$

Using Eq. (17), we obtain the two-sided $100(1-\alpha)\%$ predictive bounds $\left(L(r_n),U(r_n)\right)$ for $R_s$, that is,

$$P\left(L(r_n)<R_s<U(r_n)\right)=1-\alpha.$$

We now consider the special case $s=n+1$, which is of practical interest because, after observing $n$ records, we want to predict the next record. The predictive survival function of $R_{n+1}$ is given by

$$P(R_{n+1}\ge d\,|\,r_n)=\sum_{j=1}^{k}P_j\left[1-\left(\frac{I(r_n,\theta_j)}{I(d,\theta_j)}\right)^{n+a_j}\right].$$

Assume now that the scale parameter is known (without loss of generality, θ = 1), with gamma prior parameters $a$ and $b$ for α. In this case, the predictive survival function can be written as

$$P(R_{n+1}\ge d\,|\,r_n)=1-\left[\frac{b-\ln\left\{G(r_n)\left[2-G(r_n)\right]\right\}}{b-\ln\left\{G(d)\left[2-G(d)\right]\right\}}\right]^{n+a}. \tag{18}$$

From Eqs. (17) and (18), we obtain the lower and upper limits as

$$L(r_n)=G^{-1}\left(1-\sqrt{1-\exp\left\{b-\frac{b-\ln\left\{G(r_n)\left[2-G(r_n)\right]\right\}}{(\alpha/2)^{1/(n+a)}}\right\}}\right),$$

$$U(r_n)=G^{-1}\left(1-\sqrt{1-\exp\left\{b-\frac{b-\ln\left\{G(r_n)\left[2-G(r_n)\right]\right\}}{(1-\alpha/2)^{1/(n+a)}}\right\}}\right).$$
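For the known-θ special case, the bounds can be computed in closed form by inverting the predictive survival function of Eq. (18), using the root $G=1-\sqrt{1-z}$ of $G^2-2G+z=0$. The sketch below assumes the exponential baseline with θ = 1; the values of $r_n$, $a$, and $b$ are illustrative, and the interval level is renamed to avoid clashing with the shape parameter α.

```python
import math

def prediction_bounds(rn, n, a, b, level=0.95):
    # Two-sided bounds for the next lower record R_{n+1} when theta = 1, from
    # P(R_{n+1} >= d | rn) = 1 - [I(rn)/I(d)]^{n+a}, I(x) = b - ln(1 - e^{-2x}),
    # since G(x)[2 - G(x)] = 1 - e^{-2x} for G(x) = 1 - e^{-x}.
    gam = 1.0 - level
    I_rn = b - math.log(1.0 - math.exp(-2.0 * rn))

    def bound(q):
        # Solve [I(rn)/I(d)]^{n+a} = q for d.
        I_d = I_rn / q ** (1.0 / (n + a))
        z = math.exp(b - I_d)              # = G(d)[2 - G(d)] = 1 - e^{-2d}
        g = 1.0 - math.sqrt(1.0 - z)       # root of G^2 - 2G + z = 0 in [0, 1]
        return -math.log(1.0 - g)          # d = G^{-1}(g)

    return bound(gam / 2.0), bound(1.0 - gam / 2.0)

def predictive_survival(d, rn, n, a, b):
    I_rn = b - math.log(1.0 - math.exp(-2.0 * rn))
    I_d = b - math.log(1.0 - math.exp(-2.0 * d))
    return 1.0 - (I_rn / I_d) ** (n + a)

L, U = prediction_bounds(0.641, 9, 4.0, 1.0)
```

Plugging the bounds back into the predictive survival function recovers the nominal tail probabilities.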

## 6. NUMERICAL COMPUTATIONS

In this section, a simulation study is conducted to illustrate the estimation and prediction methods described in the preceding sections. We consider the exponential distribution with df

$$G(x;\theta)=1-e^{-x/\theta},\quad x>0,\ \theta>0,$$
as a special case of the baseline df in the model (1), giving the TL-Exp distribution.

### Example 6.1.

We generate lower records of size n = 9 from the TL-Exp distribution with α = 5 and θ = 3. The lower record values are

7.0752, 4.6823, 4.0686, 3.9577, 3.3374, 1.6600, 1.5436, 1.1236, 0.6410.

The MLEs of α and θ are 4.014078 and 2.885522, respectively, obtained by solving the nonlinear Eqs. (6) and (7) in the R software by the Newton–Raphson method. Using these estimates, the MLE of the reliability function is $\hat{R}(0.5)=0.9928$ at t = 0.5 and $\hat{R}(1)=0.9381$ at t = 1. We assume that the scale parameter θ takes the finite set of values 2.0, 2.1, …, 2.9, with equal prior probability 0.1 for each $\theta_j$, $j=1,2,\ldots,10$.

To obtain the Bayes estimators of the various parameters, it is first necessary to obtain the hyperparameters $(a_j,b_j)$ for each $\theta_j$, $j=1,2,\ldots,10$. The hyperparameters $(a_j,b_j)$ can be obtained from the expected value of the reliability function $R(t)$ conditional on $\theta=\theta_j$, using

$$E_{\alpha|\theta_j}\left[R(t)\,|\,\theta=\theta_j\right]=\int_0^{\infty}\left[1-\left(1-e^{-2t/\theta_j}\right)^{\alpha}\right]\frac{b_j^{a_j}\,\alpha^{a_j-1}\,e^{-\alpha b_j}}{\Gamma(a_j)}\,d\alpha=1-\left[1-\frac{\ln\left(1-e^{-2t/\theta_j}\right)}{b_j}\right]^{-a_j}. \tag{19}$$

For two values $R(t_1)$ and $R(t_2)$, the values of $a_j$ and $b_j$ for each value of θ can be obtained numerically from Eq. (19). The nonparametric estimate $\tilde{R}(R_i)=(n-i+0.625)/(n+0.25)$, $i=1,2,\ldots,n$, can be used to provide two different values of the reliability function (see Martz and Waller [29]). In this case, we use $\tilde{R}(3.9577)=0.6081081$ and $\tilde{R}(1.1236)=0.1756757$. These two values are substituted into Eq. (19), and $a_j$ and $b_j$ are solved for numerically for each $\theta_j$, $j=1,2,\ldots,10$, using the Newton–Raphson method. The resulting posterior probabilities for each $\theta_j$ are presented in Table 1. The MLEs, Bayes estimators, and estimates of the reliability function (for t = 0.5, 1, 1.5) are presented in Tables 2 and 3.
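Eq. (19) can be solved for $(a_j,b_j)$ with any root finder; the paper uses Newton–Raphson. The sketch below instead uses a nested bisection on $b$ (enforcing the second equation exactly through $a$), with illustrative target reliabilities chosen to be internally consistent (the larger $t$ paired with the smaller $R(t)$); it does not attempt to reproduce the Table 1 values.

```python
import math

def elicit_hyperparameters(theta, t1, R1, t2, R2):
    # Solve Eq. (19), E[R(t_i)|theta] = 1 - [1 - ln(1 - e^{-2 t_i/theta})/b]^{-a},
    # for (a, b), given two elicited reliability values R1 at t1 and R2 at t2.
    s1 = -math.log(1.0 - math.exp(-2.0 * t1 / theta))   # = -ln{G(t1)[2-G(t1)]} > 0
    s2 = -math.log(1.0 - math.exp(-2.0 * t2 / theta))
    c1 = -math.log(1.0 - R1)        # first equation reads  a * ln(1 + s1/b) = c1
    c2 = -math.log(1.0 - R2)        # second equation reads a * ln(1 + s2/b) = c2

    def a_of(b):
        return c2 / math.log(1.0 + s2 / b)    # enforce the second equation exactly

    def residual(b):
        return a_of(b) * math.log(1.0 + s1 / b) - c1   # first-equation residual

    lo, hi = 1e-6, 1e6                        # bracket for bisection on b
    for _ in range(200):
        mid = math.sqrt(lo * hi)              # bisect on the log scale
        if residual(mid) * residual(lo) > 0:
            lo = mid
        else:
            hi = mid
    b = math.sqrt(lo * hi)
    return a_of(b), b

a_hat, b_hat = elicit_hyperparameters(theta=2.0, t1=1.0, R1=0.9, t2=3.0, R2=0.6)
```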

| j | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| θ | 2.0 | 2.1 | 2.2 | 2.3 | 2.4 | 2.5 | 2.6 | 2.7 | 2.8 | 2.9 |
| p | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| a | 5.273678 | 5.069663 | 4.893300 | 4.739474 | 4.604236 | 4.484489 | 4.377778 | 4.282129 | 4.195944 | 4.117911 |
| b | 1.0022315 | 0.9941453 | 0.9855453 | 0.9765224 | 0.9671572 | 0.9575209 | 0.9476757 | 0.9376761 | 0.9275691 | 0.9173958 |
| u | 1.347749e−14 | 3.602815e−14 | 8.744126e−14 | 1.952425e−13 | 4.054791e−13 | 7.904341e−13 | 1.457474e−12 | 2.558572e−12 | 4.299937e−12 | 6.951137e−12 |
| P | 0.07127739 | 0.08399823 | 0.09474197 | 0.10297468 | 0.10845833 | 0.11121071 | 0.11144265 | 0.10949040 | 0.10575451 | 0.10065113 |
Table 1

Prior information and posterior probabilities.

| | ML | BS | BE | BL (c=-0.5) | BL (c=0.5) | BL (c=1) | BL (c=1.5) | BL (c=2) |
|---|---|---|---|---|---|---|---|---|
| α | 4.014078 | 5.307494 | 4.900998 | 5.937348 | 4.831392 | 4.454377 | 4.145928 | 3.887385 |
| θ | 2.885522 | 2.475928 | 2.444745 | 2.494692 | 2.456996 | 2.438097 | 2.419431 | 2.401179 |

ML, maximum likelihood estimator; BS, Bayes estimator under squared error loss; BE, Bayes estimator under entropy loss; BL, Bayes estimator under LINEX loss.

Table 2

Estimates of α and θ.

| t | UMVUE | ML | BS | BE | BL (c=-0.5) | BL (c=0.5) | BL (c=1) | BL (c=1.5) | BL (c=2) |
|---|---|---|---|---|---|---|---|---|---|
| 0.5 | 0.9902 | 0.9928 | 0.9836 | 0.9830 | 0.9838 | 0.9835 | 0.9834 | 0.9832 | 0.9831 |
| 1 | 0.6737 | 0.9381 | 0.9064 | 0.9007 | 0.9075 | 0.9052 | 0.9041 | 0.9028 | 0.9016 |
| 1.5 | 0.3092 | 0.8265 | 0.7699 | 0.7543 | 0.7725 | 0.7672 | 0.7645 | 0.7617 | 0.7589 |

UMVUE, uniformly minimum-variance unbiased estimator; ML, maximum likelihood estimator; BS, Bayes estimator under squared error loss; BE, Bayes estimator under entropy loss; BL, Bayes estimator under LINEX loss.

Table 3

Estimates of reliability for different t.

Using the prediction procedure described in Section 5, the 95% prediction interval for the next lower record is (0.10889, 0.2756101).

The mean squared errors (MSEs) and estimated risks of the estimators of the parameters and of the reliability function are compared according to the following steps:

1. Samples of lower records of sizes n ∈ {6, 7, 8} are generated from the TL-Exp distribution with θ = 3 and α ∈ {1.5, 1.8, 2}.

2. The values of $a_j$ and $b_j$ for each $\theta_j$, $j=1,2,\ldots,10$, are obtained using the procedure discussed above.

3. The estimates of α, θ, and $R(t)$ are obtained.

4. The above steps are repeated 10,000 times to evaluate the MSEs of these estimates, and the estimated risks under the different loss functions are compared using

$$ER(\delta)=\frac{1}{m}\sum_{i=1}^{m}L(\delta_i,\lambda).$$
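The simulation scheme above can be sketched for the special case in which θ is known, so that the MLE of α is available in closed form from Eq. (6). The record-generation shortcut used here relies on the standard fact that $-\ln F(R_m)$ follows a Gamma(m, 1) law for lower records; the sample sizes, seed, and loss parameter are our own illustrative choices.

```python
import math
import random

def lower_records_tlexp(n, alpha, theta, rng):
    # -ln F(R_m) ~ Gamma(m, 1) for lower records, so generate the first n lower
    # records by cumulating Exp(1) increments and applying the TL-Exp quantile
    # function F^{-1}(u) = -(theta/2) ln(1 - u^{1/alpha}).
    recs, g = [], 0.0
    for _ in range(n):
        g += rng.expovariate(1.0)
        u = math.exp(-g)                       # = F(R_m)
        recs.append(-(theta / 2.0) * math.log(1.0 - u ** (1.0 / alpha)))
    return recs

def mle_alpha(recs, theta):
    # With theta known, Eq. (6) gives the MLE of alpha in closed form.
    return -len(recs) / math.log(1.0 - math.exp(-2.0 * recs[-1] / theta))

rng = random.Random(42)
alpha_true, theta, n, m, c = 1.5, 3.0, 6, 1000, 1.0
se_loss, linex_loss = [], []
for _ in range(m):
    d = mle_alpha(lower_records_tlexp(n, alpha_true, theta, rng), theta) - alpha_true
    se_loss.append(d * d)                              # squared error loss
    linex_loss.append(math.exp(c * d) - c * d - 1.0)   # LINEX loss

mse = sum(se_loss) / m
linex_risk = sum(linex_loss) / m
```

The same loop extends to the Bayes estimators by replacing `mle_alpha` with the estimators of Section 4.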

All these results are presented in Tables 4 to 8.

From Tables 4 through 6, we observe that the Bayes estimates under the asymmetric loss functions perform better than the Bayes estimates under the symmetric loss function and than the MLEs, and that the UMVUE of the reliability function is better than the Bayes estimates of reliability and the MLE. Tables 7 and 8 compare the risks of the Bayes estimates of α, θ, and the reliability function; it is clear that the estimators under the asymmetric loss functions again perform better than those under the symmetric loss function.

| n | α | MSE(α̂) | MSE(θ̂) | MSE(R̂(t)) | MSE(R̃(t)) |
|---|---|---|---|---|---|
| 6 | 1.5 | 0.6073 | 7.4651 | 0.6497 | 0.0199 |
| 6 | 1.8 | 0.5958 | 7.4469 | 0.6484 | 0.0199 |
| 6 | 2 | 2.1409 | 7.7985 | 0.7996 | 0.005 |
| 7 | 1.5 | 0.4598 | 7.3734 | 0.637 | 0.0199 |
| 7 | 1.8 | 1.1648 | 7.5737 | 0.7389 | 0.0088 |
| 7 | 2 | 1.8346 | 7.6939 | 0.7917 | 0.005 |
| 8 | 1.5 | 0.3375 | 7.2909 | 0.6245 | 0.0199 |
| 8 | 1.8 | 1.0045 | 7.5234 | 0.7311 | 0.0088 |
| 8 | 2 | 1.6336 | 7.6652 | 0.7856 | 0.005 |

MLE, maximum likelihood estimation; UMVUE, uniformly minimum-variance unbiased estimator; MSE, mean squared error.

Table 4

MSE of the MLEs and UMVUE for (θ,t)=(3,0.5).

**(α, θ) = (1.5, 3)**

| n | MSE(α)BS | MSE(α)BE | MSE(α)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.1873 | 0.1981 | 0.1845 | 0.1901 | 0.193 | 0.1958 | 0.1986 |
| 7 | 0.1872 | 0.198 | 0.1844 | 0.1901 | 0.1929 | 0.1957 | 0.1985 |
| 8 | 0.1872 | 0.1979 | 0.1843 | 0.19 | 0.1928 | 0.1956 | 0.1985 |

| n | MSE(θ)BS | MSE(θ)BE | MSE(θ)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.5442 | 0.5825 | 0.5205 | 0.5666 | 0.5878 | 0.6077 | 0.6263 |
| 7 | 0.5733 | 0.6098 | 0.5507 | 0.5947 | 0.6147 | 0.6334 | 0.6508 |
| 8 | 0.6007 | 0.6354 | 0.5791 | 0.6209 | 0.6398 | 0.6573 | 0.6736 |

**(α, θ) = (1.8, 3)**

| n | MSE(α)BS | MSE(α)BE | MSE(α)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.537 | 0.5551 | 0.5322 | 0.5417 | 0.5465 | 0.5513 | 0.556 |
| 7 | 0.5368 | 0.5549 | 0.532 | 0.5416 | 0.5464 | 0.5511 | 0.5559 |
| 8 | 0.5371 | 0.5547 | 0.5323 | 0.5413 | 0.5462 | 0.5509 | 0.5557 |

| n | MSE(θ)BS | MSE(θ)BE | MSE(θ)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.5516 | 0.5897 | 0.5281 | 0.5739 | 0.595 | 0.6146 | 0.633 |
| 7 | 0.5811 | 0.6173 | 0.5586 | 0.6023 | 0.6221 | 0.6405 | 0.6577 |
| 8 | 0.6098 | 0.6434 | 0.5886 | 0.6293 | 0.6479 | 0.6652 | 0.6811 |

**(α, θ) = (2, 3)**

| n | MSE(α)BS | MSE(α)BE | MSE(α)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.8701 | 0.8931 | 0.8639 | 0.8761 | 0.8822 | 0.8882 | 0.8943 |
| 7 | 0.8709 | 0.8928 | 0.864 | 0.8758 | 0.8819 | 0.888 | 0.894 |
| 8 | 0.8726 | 0.8918 | 0.8675 | 0.8746 | 0.8809 | 0.8872 | 0.8933 |

| n | MSE(θ)BS | MSE(θ)BE | MSE(θ)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.554 | 0.5919 | 0.5305 | 0.5762 | 0.5972 | 0.6168 | 0.635 |
| 7 | 0.5877 | 0.6209 | 0.5629 | 0.606 | 0.6257 | 0.644 | 0.661 |
| 8 | 0.6218 | 0.6482 | 0.5975 | 0.6332 | 0.6518 | 0.669 | 0.6849 |

MSE, mean squared error.

Table 5

MSEs of the Bayes estimates of α and θ.

**(α, θ, t) = (1.5, 3, 0.5)**

| n | MSE(R(t))BS | MSE(R(t))BE | MSE(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0352 | 1.1539 | 0.035 | 0.0354 | 0.0357 | 0.0359 | 0.0361 |
| 7 | 0.036 | 1.1162 | 0.0358 | 0.0363 | 0.0365 | 0.0367 | 0.0369 |
| 8 | 0.0367 | 1.0875 | 0.0365 | 0.0369 | 0.0371 | 0.0374 | 0.0376 |

**(α, θ, t) = (1.8, 3, 0.5)**

| n | MSE(R(t))BS | MSE(R(t))BE | MSE(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0554 | 1.0537 | 0.0551 | 0.0556 | 0.0559 | 0.0562 | 0.0565 |
| 7 | 0.0566 | 1.0107 | 0.0563 | 0.0568 | 0.0571 | 0.0574 | 0.0577 |
| 8 | 0.0576 | 0.9769 | 0.0573 | 0.0578 | 0.0581 | 0.0584 | 0.0586 |

**(α, θ, t) = (2, 3, 0.5)**

| n | MSE(R(t))BS | MSE(R(t))BE | MSE(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0669 | 1.0006 | 0.0666 | 0.0672 | 0.0675 | 0.0678 | 0.0681 |
| 7 | 0.0682 | 0.9591 | 0.0679 | 0.0685 | 0.0688 | 0.0691 | 0.0694 |
| 8 | 0.07 | 0.9341 | 0.0723 | 0.0709 | 0.07 | 0.0701 | 0.0704 |

**(α, θ, t) = (1.5, 3, 1)**

| n | MSE(R(t))BS | MSE(R(t))BE | MSE(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0532 | 0.0059 | 0.0529 | 0.0535 | 0.0538 | 0.0541 | 0.0544 |
| 7 | 0.0545 | 0.0048 | 0.0542 | 0.0548 | 0.0551 | 0.0554 | 0.0557 |
| 8 | 0.0559 | 0.0038 | 0.0556 | 0.0562 | 0.0565 | 0.0567 | 0.057 |

**(α, θ, t) = (1.8, 3, 1)**

| n | MSE(R(t))BS | MSE(R(t))BE | MSE(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0883 | 0.0003 | 0.0879 | 0.0887 | 0.0891 | 0.0895 | 0.0898 |
| 7 | 0.0902 | 0.0002 | 0.0898 | 0.0906 | 0.0909 | 0.0913 | 0.0917 |
| 8 | 0.0919 | 0.0003 | 0.0918 | 0.0923 | 0.0926 | 0.093 | 0.0933 |

**(α, θ, t) = (2, 3, 1)**

| n | MSE(R(t))BS | MSE(R(t))BE | MSE(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.1116 | 0.0011 | 0.1112 | 0.1121 | 0.1125 | 0.113 | 0.1134 |
| 7 | 0.1139 | 0.0019 | 0.1141 | 0.1146 | 0.1147 | 0.1151 | 0.1155 |
| 8 | 0.1161 | 0.0025 | 0.1168 | 0.1163 | 0.1165 | 0.1169 | 0.1173 |

MSE, mean squared error.

Table 6

MSEs of the estimates of Rt.

**(α, θ) = (1.5, 3)**

| n | ER(α)BS | ER(α)BE | ER(α)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.1873 | 0.0553 | 0.0248 | 0.0221 | 0.0838 | 0.1787 | 0.3015 |
| 7 | 0.1872 | 0.0552 | 0.0248 | 0.0221 | 0.0838 | 0.1786 | 0.3013 |
| 8 | 0.1872 | 0.0552 | 0.0248 | 0.0221 | 0.0837 | 0.1785 | 0.3012 |

| n | ER(θ)BS | ER(θ)BE | ER(θ)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.5438 | 0.8625 | 0.0737 | 0.0627 | 0.2313 | 0.4799 | 0.7881 |
| 7 | 0.5738 | 0.8625 | 0.0783 | 0.0657 | 0.2406 | 0.4969 | 0.8126 |
| 8 | 0.6006 | 0.8625 | 0.0825 | 0.0683 | 0.2491 | 0.5121 | 0.8345 |

**(α, θ) = (1.8, 3)**

| n | ER(α)BS | ER(α)BE | ER(α)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.537 | 0.1204 | 0.0754 | 0.0601 | 0.2167 | 0.4421 | 0.7164 |
| 7 | 0.5368 | 0.1203 | 0.0754 | 0.0601 | 0.2167 | 0.442 | 0.7162 |
| 8 | 0.537 | 0.1203 | 0.0754 | 0.0601 | 0.2166 | 0.4418 | 0.7161 |

| n | ER(θ)BS | ER(θ)BE | ER(θ)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.5508 | 0.8625 | 0.0747 | 0.0634 | 0.2334 | 0.4837 | 0.7937 |
| 7 | 0.5812 | 0.8625 | 0.0795 | 0.0664 | 0.2431 | 0.5014 | 0.8191 |
| 8 | 0.6113 | 0.8625 | 0.0841 | 0.0693 | 0.2522 | 0.5177 | 0.8427 |

**(α, θ) = (2, 3)**

| n | ER(α)BS | ER(α)BE | ER(α)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.8701 | 0.1671 | 0.1269 | 0.0943 | 0.3302 | 0.6569 | 1.0422 |
| 7 | 0.87 | 0.167 | 0.1269 | 0.0942 | 0.3301 | 0.6568 | 1.0419 |
| 8 | 0.873 | 0.1669 | 0.1286 | 0.0942 | 0.3296 | 0.656 | 1.041 |

| n | ER(θ)BS | ER(θ)BE | ER(θ)BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.554 | 0.8625 | 0.0752 | 0.0637 | 0.2344 | 0.4857 | 0.7966 |
| 7 | 0.5854 | 0.8625 | 0.0801 | 0.0668 | 0.2443 | 0.5035 | 0.8223 |
| 8 | 0.6224 | 0.8625 | 0.0866 | 0.0697 | 0.253 | 0.5194 | 0.8451 |

ER, estimated risk.

Table 7

Estimated risks for Bayes estimates of α and θ.

**(α, θ, t) = (1.5, 3, 0.5)**

| n | ER(R(t))BS | ER(R(t))BE | ER(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0351 | 0.4487 | 0.0045 | 0.0043 | 0.0172 | 0.0381 | 0.0662 |
| 7 | 0.036 | 0.4364 | 0.0046 | 0.0044 | 0.0176 | 0.0389 | 0.0675 |
| 8 | 0.0367 | 0.4263 | 0.0047 | 0.0045 | 0.0179 | 0.0395 | 0.0687 |

**(α, θ, t) = (1.8, 3, 0.5)**

| n | ER(R(t))BS | ER(R(t))BE | ER(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0554 | 0.3808 | 0.0072 | 0.0067 | 0.0264 | 0.0576 | 0.0993 |
| 7 | 0.0565 | 0.3692 | 0.0073 | 0.0068 | 0.0269 | 0.0587 | 0.101 |
| 8 | 0.0576 | 0.3592 | 0.0076 | 0.007 | 0.0279 | 0.0614 | 0.1074 |

**(α, θ, t) = (2, 3, 0.5)**

| n | ER(R(t))BS | ER(R(t))BE | ER(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0554 | 0.3808 | 0.0072 | 0.0067 | 0.0264 | 0.0576 | 0.0993 |
| 7 | 0.0565 | 0.3692 | 0.0073 | 0.0068 | 0.0269 | 0.0587 | 0.101 |
| 8 | 0.0576 | 0.3592 | 0.0076 | 0.007 | 0.0279 | 0.0614 | 0.1074 |

**(α, θ, t) = (1.5, 3, 1)**

| n | ER(R(t))BS | ER(R(t))BE | ER(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0531 | 0.0063 | 0.0069 | 0.0064 | 0.0254 | 0.0557 | 0.0961 |
| 7 | 0.0545 | 0.0051 | 0.007 | 0.0066 | 0.026 | 0.057 | 0.0982 |
| 8 | 0.0558 | 0.0041 | 0.0072 | 0.0067 | 0.0266 | 0.0581 | 0.1001 |

**(α, θ, t) = (1.8, 3, 1)**

| n | ER(R(t))BS | ER(R(t))BE | ER(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.0883 | 3e-04 | 0.0115 | 0.0106 | 0.0409 | 0.0884 | 0.1508 |
| 7 | 0.0902 | 2e-04 | 0.0118 | 0.0108 | 0.0417 | 0.0901 | 0.1535 |
| 8 | 0.092 | 3e-04 | 0.012 | 0.011 | 0.0424 | 0.0915 | 0.1559 |

**(α, θ, t) = (2, 3, 1)**

| n | ER(R(t))BS | ER(R(t))BE | ER(R(t))BL, c=-0.5 | c=0.5 | c=1 | c=1.5 | c=2 |
|---|---|---|---|---|---|---|---|
| 6 | 0.1117 | 0.001 | 0.0147 | 0.0133 | 0.051 | 0.1094 | 0.1855 |
| 7 | 0.1138 | 0.0015 | 0.015 | 0.0135 | 0.0518 | 0.1112 | 0.1884 |
| 8 | 0.1165 | 0.0023 | 0.0158 | 0.0138 | 0.0541 | 0.1164 | 0.1985 |

ER, estimated risk.

Table 8

Estimated risk for Bayes estimates of R(t).

## ACKNOWLEDGMENT

The authors thank the editor and the anonymous referees for their valuable suggestions, which improved the original manuscript.

## REFERENCES

6. S. Kotz and E. Seier, InterStat, Vol. 1, 2007, pp. 1-15.
7. A.A. Zghoul, Statistica, Vol. 71, 2011, pp. 355-365.
11. M. Ahsanullah, Record Statistics, Nova Science Publishers, Commack, New York, 1995.
12. M. Ahsanullah, Record Values-Theory and Applications, University Press of America, Lanham, Maryland, 2004.
21. Z. Anwar, N. Gupta, M.A.R. Khan, and Q.A. Jamal, J. Mod. Appl. Stat. Methods.
22. M. Arshad and Q.A. Jamal, J. Mod. Appl. Stat. Methods.
24. H.R. Varian, in Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage, 1975, pp. 195-208.
26. W. James and C. Stein, in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, 1961, pp. 361-379.
29. H. Martz and R. Waller, Bayesian Reliability Analysis, John Wiley and Sons, New York, 1982.
ISSN (Online)
2214-1766
ISSN (Print)
1538-7887
