# Journal of Statistical Theory and Applications

Volume 17, Issue 1, March 2018, Pages 1 - 14

# Bayesian Estimation of the Scale Parameter of the Marshall-Olkin Exponential Distribution under Progressively Type-II Censored Samples

Authors
Mukhtar M. Salah (m.salah@mu.edu.sa)
Basic Engineering Science, College of Engineering, Majmaah University Majmaah, Kingdom of Saudi Arabia
Received 14 March 2017, Accepted 5 December 2017, Available Online 31 March 2018.
DOI: 10.2991/jsta.2018.17.1.1
Keywords
Progressive censoring; approximate maximum likelihood estimator; Bayes estimator; exponential distribution
Abstract

This paper studies the Bayes estimator, the maximum likelihood estimator and the approximate maximum likelihood estimator of the scale parameter for the Marshall-Olkin exponential distribution under progressively type-II censored samples. All three estimators are derived and presented in simple forms. It is observed that the Bayes estimator and the maximum likelihood estimator cannot be obtained analytically, hence they are computed numerically. Finally, a comparison method is presented in order to assess the performance of these estimators.

Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

## 1 Introduction

Let Z be a random variable from the Marshall-Olkin exponential (MOE) distribution with scale parameter λ and shape parameter α. The probability density function (pdf) of Z is given as follows:

$$f(z)=\frac{\alpha e^{-z/\lambda}}{\lambda\left(1-(1-\alpha)e^{-z/\lambda}\right)^{2}},\quad z\ge 0,\;\alpha>0\;\text{and}\;\lambda>0,\tag{1.1}$$
and its cumulative distribution function (cdf) is given as
$$F(z)=1-\frac{\alpha e^{-z/\lambda}}{1-(1-\alpha)e^{-z/\lambda}},\quad z\ge 0,\;\alpha>0\;\text{and}\;\lambda>0.\tag{1.2}$$

The pdf and cdf of the standard MOE distribution are given, respectively, as follows:

$$f(x)=\frac{\alpha e^{-x}}{\left(1-(1-\alpha)e^{-x}\right)^{2}},\quad 0\le x<\infty,\;\alpha>0,\tag{1.3}$$

$$F(x)=1-\frac{\alpha e^{-x}}{1-(1-\alpha)e^{-x}},\quad 0\le x<\infty,\;\alpha>0,\tag{1.4}$$
where X = Z/λ. Note that when α = 1 in Eqs. (1.3) and (1.4), the MOE distribution reduces to the standard exponential distribution, and when α = 2 it reduces to the half logistic distribution. The hazard rate h(x) of the MOE distribution is given by
$$h(x)=\frac{1}{1-(1-\alpha)e^{-x}},\quad x\ge 0,\;\alpha>0.$$
It has been shown that when α ≥ 1 the hazard rate h(x) is increasing, and when 0 < α < 1 it is decreasing. So the family of MOE distributions is an increasing failure rate (IFR) family when α ≥ 1 and a decreasing failure rate (DFR) family when 0 < α < 1.
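The distribution functions above are simple to code; a minimal Python sketch (function names are ours, not the paper's) that also checks the reductions and the hazard-rate behavior noted above:

```python
import math

def moe_pdf(z, alpha, lam=1.0):
    """Pdf of the MOE distribution, Eq. (1.1)."""
    e = math.exp(-z / lam)
    return alpha * e / (lam * (1.0 - (1.0 - alpha) * e) ** 2)

def moe_cdf(z, alpha, lam=1.0):
    """Cdf of the MOE distribution, Eq. (1.2)."""
    e = math.exp(-z / lam)
    return 1.0 - alpha * e / (1.0 - (1.0 - alpha) * e)

def moe_hazard(x, alpha):
    """Hazard rate h(x) of the standard MOE distribution (lam = 1)."""
    return 1.0 / (1.0 - (1.0 - alpha) * math.exp(-x))

# alpha = 1 recovers the standard exponential distribution:
assert abs(moe_cdf(0.7, 1.0) - (1.0 - math.exp(-0.7))) < 1e-12
# h(x) is increasing for alpha >= 1 and decreasing for 0 < alpha < 1:
assert moe_hazard(0.5, 2.0) < moe_hazard(1.5, 2.0)
assert moe_hazard(0.5, 0.5) > moe_hazard(1.5, 0.5)
```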

Censored sampling arises in a life-testing experiment whenever the experimenter does not observe (either intentionally or unintentionally) the failure times of all units placed on the test. Consider, for example, a life-testing experiment in which n items are kept under observation; the items could be systems, computers, or individuals in a clinical trial or a reliability study. The removal of units from the experiment may be pre-planned and intentional, done in order to save the time and cost associated with testing. The data obtained from such experiments are called censored data. Among the many types of censoring schemes, we mention two. Suppose n units are placed on a life test. Type-I (time) censoring: the experiment is terminated at a pre-determined time t, so that only the failure times of the items that failed prior to this time are recorded; the data so obtained constitute a type-I censored sample. Type-II censoring: the experiment is terminated at the rth failure, that is, at time Xr:n; we then obtain a type-II censored sample, where r is fixed while the duration of the experiment Xr:n is random. Many articles in the literature have discussed inferential methods under type-I and type-II censoring for various parametric families of distributions; see, for example, [1, 3, 5, 7, 9, 11].

A generalization of type-II censoring is progressive type-II censoring. Suppose n units taken from the same population are placed on a life test. At the first failure time, R1 of the surviving units are randomly withdrawn from the test; at the second failure time, another R2 surviving units are selected at random and taken out of the experiment, and so on. Finally, at the mth failure, the remaining Rm = n − m − R1 − R2 − ... − Rm−1 units are removed. In this scheme, (R1, R2, ..., Rm) is fixed in advance. The resulting m ordered failure times, which we denote by

$$X_{1:m:n}^{(R_{1},R_{2},\ldots,R_{m})},\;X_{2:m:n}^{(R_{1},R_{2},\ldots,R_{m})},\;\ldots,\;X_{m:m:n}^{(R_{1},R_{2},\ldots,R_{m})},$$
are referred to as progressive type-II censored order statistics. In the special case R1 = R2 = ... = Rm−1 = 0, so that Rm = n − m, this scheme reduces to the conventional type-II censoring scheme; when R1 = R2 = ... = Rm = 0, so that m = n, no censoring occurs (the complete-sample case). For a more detailed discussion of progressive censoring, one may refer to the literature. If the failure times are based on an absolutely continuous distribution function F with probability density function f, the joint probability density function of the progressively censored failure times X1:m:n, X2:m:n, ..., Xm:m:n is given by
$$f_{X_{1:m:n},\ldots,X_{m:m:n}}(x_{1},x_{2},\ldots,x_{m})=A(n,m-1)\prod_{i=1}^{m}f(x_{i})\left[1-F(x_{i})\right]^{R_{i}},\quad -\infty<x_{1}<x_{2}<\cdots<x_{m}<\infty,\tag{1.5}$$
where f(·) and F(·) are, respectively, the pdf and cdf of the underlying distribution, and
$$A(n,m-1)=n(n-1-R_{1})(n-2-R_{1}-R_{2})\cdots(n-m+1-R_{1}-\cdots-R_{m-1}).$$
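The censoring mechanism just described can be simulated physically: put n = m + ΣRi units on test, record each minimum, and randomly withdraw Ri survivors after the ith failure. A Python sketch under our own naming (one standard way to generate such samples, not necessarily the algorithm cited later in the paper); the quantile function inverts Eq. (1.2), and the normalizing constant A(n, m − 1) is included as well:

```python
import math, random

def progressive_constant(n, R):
    """A(n, m-1) = n(n-1-R_1)(n-2-R_1-R_2)...(n-m+1-R_1-...-R_{m-1})."""
    A, removed = float(n), 0
    for i, r in enumerate(R[:-1], start=1):
        removed += r
        A *= n - i - removed
    return A

def moe_inverse_cdf(u, alpha, lam=1.0):
    """Quantile function of the MOE distribution, obtained by inverting Eq. (1.2)."""
    return lam * math.log((1.0 - (1.0 - alpha) * u) / (1.0 - u))

def progressive_sample(alpha, lam, R, rng=random.Random(2018)):
    """Physically simulate a progressive type-II censored MOE sample:
    at each failure, remove the failed unit plus R_i random survivors."""
    n = len(R) + sum(R)
    alive = [moe_inverse_cdf(rng.random(), alpha, lam) for _ in range(n)]
    failures = []
    for r in R:
        t = min(alive)              # next observed failure
        failures.append(t)
        alive.remove(t)
        for _ in range(r):          # random withdrawal of r survivors
            alive.pop(rng.randrange(len(alive)))
    return failures

# Example: n = 10, m = 5, scheme (1, 1, 1, 1, 1)
xs = progressive_sample(alpha=1.5, lam=1.0, R=[1, 1, 1, 1, 1])
assert len(xs) == 5 and xs == sorted(xs)
assert progressive_constant(10, [1, 1, 1, 1, 1]) == 10 * 8 * 6 * 4 * 2
```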

The rest of the paper is organized as follows. In Section 2, the approximate maximum likelihood estimator (AMLE) of the scale parameter λ of the MOE distribution is presented and used as an initial starting point for finding the maximum likelihood estimator (MLE) of λ. In Section 3, the Bayes estimator of λ is studied and presented. Finally, in Section 4, numerical computations are presented to compare these estimators.

## 2 Approximate Maximum Likelihood Estimation

In this section, we derive the AMLE of the scale parameter λ of the MOE distribution under progressively type-II censored sampling by linearizing the likelihood equation. Let Y1:m:n, Y2:m:n, ..., Ym:m:n denote a progressively type-II censored sample from the MOE distribution with pdf and cdf as in Eqs. (1.1) and (1.2), respectively. Let Xi = Yi/λ, i = 1, 2, ..., m; the Xi's are then progressively type-II censored order statistics from a sample of size n from the standard MOE distribution. One can approximate the function F(xi) by expanding it in a Taylor series around the point E(Xi:m:n) = νi:m:n.

It is known that

$$F(X_{i:m:n})\stackrel{d}{=}U_{i:m:n},$$
where Ui:m:n is the ith progressively type-II censored order statistic from uniform U(0, 1) distribution. We then have
$$X_{i:m:n}\stackrel{d}{=}F^{-1}(U_{i:m:n}),$$
with
$$F^{-1}(u)=\ln\left(\frac{1-(1-\alpha)u}{1-u}\right).$$

Consequently,

$$\nu_{i:m:n}=E(X_{i:m:n})\approx F^{-1}(\gamma_{i:m:n})=\ln\left(\frac{1-(1-\alpha)\gamma_{i:m:n}}{1-\gamma_{i:m:n}}\right),\tag{2.1}$$
where γi:m:n = E(Ui:m:n). It is known that
$$\gamma_{i:m:n}=1-\prod_{j=m-i+1}^{m}\frac{j+R_{m-j+1}+\cdots+R_{m}}{j+1+R_{m-j+1}+\cdots+R_{m}},\quad i=1,2,\ldots,m.$$
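The product formula for γi:m:n and the resulting νi:m:n are straightforward to implement; a Python sketch (our naming), checked against the complete-sample identity E(Ui:n) = i/(n + 1):

```python
import math

def gamma_imn(i, R):
    """gamma_{i:m:n} = E(U_{i:m:n}) for the censoring scheme R = (R_1, ..., R_m)."""
    m = len(R)
    prod = 1.0
    for j in range(m - i + 1, m + 1):
        s = sum(R[m - j:])          # R_{m-j+1} + ... + R_m (R is 0-indexed)
        prod *= (j + s) / (j + 1 + s)
    return 1.0 - prod

def nu_imn(i, R, alpha):
    """nu_{i:m:n} ~ F^{-1}(gamma_{i:m:n}) for the standard MOE cdf."""
    g = gamma_imn(i, R)
    return math.log((1.0 - (1.0 - alpha) * g) / (1.0 - g))

# Complete-sample check (all R_i = 0, m = n = 5): gamma_i = i / (n + 1)
for i in range(1, 6):
    assert abs(gamma_imn(i, [0] * 5) - i / 6.0) < 1e-12
```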

By expanding F(xi) in a Taylor series around the point νi:m:n and keeping only the first two terms of the approximation, we get

$$F(X_{i})\approx F(\nu_{i:m:n})+(x_{i}-\nu_{i:m:n})f(\nu_{i:m:n})=w_{i}+\delta_{i}x_{i},\tag{2.2}$$
where
$$w_{i}=F(\nu_{i:m:n})-\nu_{i:m:n}f(\nu_{i:m:n}),\quad i=1,2,\ldots,m,$$
and
$$\delta_{i}=f(\nu_{i:m:n}),\quad i=1,2,\ldots,m.$$
By using Eq. (1.5), one can find the MLE of λ by differentiating the log-likelihood in Eq. (2.3) with respect to λ and then solving Eq. (2.4) numerically:
$$\ln L=\ln c-m\ln\alpha-m\ln\lambda+\sum_{i=1}^{m}(R_{i}+1)\ln\left[1-F(X_{i})\right]+\sum_{i=1}^{m}\ln\left[1-(1-\alpha)F(X_{i})\right],\tag{2.3}$$
$$\frac{\partial\ln L}{\partial\lambda}=\frac{1}{\alpha\lambda}\left[-\alpha m+\sum_{i=1}^{m}(R_{i}+2-\alpha)X_{i}-\sum_{i=1}^{m}(R_{i}+2)(1-\alpha)F(X_{i})X_{i}\right]=0.\tag{2.4}$$

Upon using Eqs. (2.2) and (2.4), the AMLE of λ based on the progressively type-II censored sample can be obtained by solving Eq. (2.5):

$$\frac{\partial\ln L}{\partial\lambda}\approx\frac{1}{\alpha\lambda}\left[-\alpha m+\sum_{i=1}^{m}(R_{i}+2-\alpha)X_{i}-\sum_{i=1}^{m}(R_{i}+2)(1-\alpha)X_{i}(w_{i}+\delta_{i}X_{i})\right]=0,\tag{2.5}$$
After simplifying Eq. (2.5), with Xi = yi/λ, we get
$$A\lambda^{2}+B\lambda+C=0.\tag{2.6}$$

By solving the quadratic Eq. (2.6) with respect to λ, we obtain the AMLE of λ as

$$\hat{\lambda}=\frac{-B+\sqrt{B^{2}-4AC}}{2A},\tag{2.7}$$
where
$$A=\alpha m,\qquad B=\sum_{i=1}^{m}\left[(R_{i}+2)(1-\alpha)w_{i}-(R_{i}+2-\alpha)\right]y_{i},\qquad C=\sum_{i=1}^{m}(R_{i}+2)(1-\alpha)\delta_{i}y_{i}^{2}.$$
The AMLE of the scale parameter of the MOE distribution can be used as the starting point of the Newton-Raphson numerical solution of Eq. (2.4) to find the MLE of λ.
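The AMLE-then-MLE pipeline of this section can be sketched in Python (our naming; the bracketed expression in Eq. (2.4) serves as the score function, and the Newton-Raphson step uses a numerical derivative of the score rather than a closed-form one):

```python
import math

def std_cdf(x, alpha):
    """Standard MOE cdf, Eq. (1.4)."""
    e = math.exp(-x)
    return 1.0 - alpha * e / (1.0 - (1.0 - alpha) * e)

def std_pdf(x, alpha):
    """Standard MOE pdf, Eq. (1.3)."""
    e = math.exp(-x)
    return alpha * e / (1.0 - (1.0 - alpha) * e) ** 2

def amle(y, R, alpha):
    """AMLE of lambda: the positive root of A*lam^2 + B*lam + C = 0, Eq. (2.7).
    For alpha > 1 we have C < 0, so the discriminant is positive."""
    m = len(y)
    def gamma(i):                       # gamma_{i:m:n} = E(U_{i:m:n})
        p = 1.0
        for j in range(m - i + 1, m + 1):
            s = sum(R[m - j:])
            p *= (j + s) / (j + 1 + s)
        return 1.0 - p
    A, B, C = alpha * m, 0.0, 0.0
    for i, yi in enumerate(y, start=1):
        g = gamma(i)
        nu = math.log((1.0 - (1.0 - alpha) * g) / (1.0 - g))   # Eq. (2.1)
        delta = std_pdf(nu, alpha)
        w = std_cdf(nu, alpha) - nu * delta
        B += ((R[i - 1] + 2) * (1.0 - alpha) * w - (R[i - 1] + 2 - alpha)) * yi
        C += (R[i - 1] + 2) * (1.0 - alpha) * delta * yi ** 2
    return (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

def mle(y, R, alpha, lam0, tol=1e-10):
    """MLE of lambda: Newton-Raphson on Eq. (2.4), started at the AMLE."""
    def score(lam):                     # bracketed term of Eq. (2.4)
        s = -alpha * len(y)
        for yi, ri in zip(y, R):
            x = yi / lam
            s += (ri + 2 - alpha) * x - (ri + 2) * (1.0 - alpha) * std_cdf(x, alpha) * x
        return s
    lam = lam0
    for _ in range(100):
        h = 1e-6 * lam
        d = (score(lam + h) - score(lam - h)) / (2.0 * h)      # numerical slope
        new = lam - score(lam) / d
        if new <= 0.0:                  # keep the iterate inside (0, inf)
            new = lam / 2.0
        if abs(new - lam) < tol * lam:
            return new
        lam = new
    return lam

y = [0.2, 0.5, 0.9, 1.4, 2.2]           # an illustrative (made-up) sample
R = [1, 1, 1, 1, 1]
lam_a = amle(y, R, alpha=2.0)
lam_m = mle(y, R, alpha=2.0, lam0=lam_a)
```

With this made-up sample the two estimates land close together, mirroring the closeness of the AMLE and MLE reported in Section 4.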

## 3 Bayes Estimation

In this section, we present the derivation of the Bayes estimator of the scale parameter λ of the MOE distribution. Let Z1 ≤ Z2 ≤ ... ≤ Zm be a progressively type-II censored sample from the MOE distribution with pdf and cdf as in Eqs. (1.1) and (1.2), respectively. Consider the natural conjugate family of prior distributions for the parameter λ:

$$\pi(\lambda)\propto\left(\frac{1}{\lambda}\right)^{a+1}e^{-b/\lambda},\quad\lambda>0,\;a>0\;\text{and}\;b>0.\tag{3.1}$$

The posterior density of λ is given by combining Eq. (1.5) with Eq. (3.1) as

$$\pi(\lambda|Z)\propto\left(\frac{1}{\lambda}\right)^{m+a+1}e^{-\frac{1}{\lambda}\left[\sum_{i=1}^{m}(R_{i}+1)Z_{i}+b\right]}\prod_{i=1}^{m}\left(1-(1-\alpha)e^{-Z_{i}/\lambda}\right)^{-(R_{i}+2)}.\tag{3.2}$$
The Bayes estimator of λ under the squared error loss (SEL) is the posterior mean, given by
$$\lambda_{B}=\frac{\int_{0}^{\infty}\lambda\,\pi(\lambda|Z)\,d\lambda}{\int_{0}^{\infty}\pi(\lambda|Z)\,d\lambda}.\tag{3.3}$$
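Because the posterior in Eq. (3.2) is one-dimensional, the ratio in Eq. (3.3) can also be evaluated by brute-force quadrature. A Python sketch (our naming; the integration range and grid size are ad-hoc choices, and the log-posterior is shifted by its maximum to avoid underflow):

```python
import math

def log_post(lam, z, R, alpha, a, b):
    """Unnormalized log-posterior of lambda, Eq. (3.2)."""
    s = -(len(z) + a + 1) * math.log(lam)
    s -= (sum((r + 1) * zi for zi, r in zip(z, R)) + b) / lam
    for zi, r in zip(z, R):
        s -= (r + 2) * math.log(1.0 - (1.0 - alpha) * math.exp(-zi / lam))
    return s

def bayes_posterior_mean(z, R, alpha, a, b, lo=1e-3, hi=50.0, k=20000):
    """Eq. (3.3) by trapezoidal quadrature on [lo, hi]."""
    h = (hi - lo) / k
    lams = [lo + j * h for j in range(k + 1)]
    lp = [log_post(l, z, R, alpha, a, b) for l in lams]
    c = max(lp)                                  # rescale to avoid underflow
    w = [math.exp(v - c) for v in lp]
    num = sum(l * wi for l, wi in zip(lams, w)) - 0.5 * (lams[0] * w[0] + lams[-1] * w[-1])
    den = sum(w) - 0.5 * (w[0] + w[-1])
    return num / den

# Sanity check: for alpha = 1 the posterior is inverse gamma with shape
# m + a and scale sum((R_i + 1) Z_i) + b, whose mean is scale / (m + a - 1).
est = bayes_posterior_mean([0.3, 0.6, 1.0], [1, 1, 1], alpha=1.0, a=2, b=1)
assert abs(est - 4.8 / 4.0) < 0.01
```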
Due to the complex form of the likelihood function, the Bayes estimator of λ in Eqs. (3.2) and (3.3) must be found by numerical methods. To obtain the Bayes estimator of λ, one can rewrite Eq. (3.3) as in Eq. (3.4):
$$\lambda_{B}=E(\lambda|Z)=\frac{E^{*}\left[\lambda\prod_{i=1}^{m}\left(1-(1-\alpha)e^{-Z_{i}/\lambda}\right)^{-(R_{i}+2)}\right]}{E^{*}\left[\prod_{i=1}^{m}\left(1-(1-\alpha)e^{-Z_{i}/\lambda}\right)^{-(R_{i}+2)}\right]},\tag{3.4}$$
where E* denotes the expectation with respect to the inverse gamma distribution. Since Eq. (3.4) cannot be evaluated analytically, we use a Laplace-type (Tierney-Kadane) approximation to find a numerical approximate solution. To do this, we set
$$k(\lambda)=\lambda^{2}\,\frac{\partial\ln\pi(\lambda|Z)}{\partial\lambda}=-(m+a+1)\lambda+\sum_{i=1}^{m}(R_{i}+1)Z_{i}+b+\sum_{i=1}^{m}\frac{(1-\alpha)(R_{i}+2)Z_{i}e^{-Z_{i}/\lambda}}{1-(1-\alpha)e^{-Z_{i}/\lambda}}.\tag{3.5}$$
From Eq. (3.5) it follows that the root λ̂* of k(λ) = 0 is the only mode of the posterior density in Eq. (3.2). For simplicity, let
$$k(\lambda)=\Phi(\lambda)+\Psi(\lambda),$$
where
$$\Phi(\lambda)=-(m+a+1)\lambda+\sum_{i=1}^{m}(R_{i}+1)Z_{i}+b,$$
and
$$\Psi(\lambda)=\sum_{i=1}^{m}\frac{(1-\alpha)(R_{i}+2)Z_{i}e^{-Z_{i}/\lambda}}{1-(1-\alpha)e^{-Z_{i}/\lambda}}.$$
Φ(λ) and Ψ(λ) are decreasing and increasing on (0, ∞), respectively; therefore Eq. (3.5) admits a unique solution λ̂*.

Let L(λ; z) be the likelihood function of λ based on n observations and π(λ|z) denote the posterior distribution of λ. Then the posterior mean of a function Φ(λ) is given by

$$E[\Phi(\lambda)|Z]=\int\Phi(\lambda)\pi(\lambda|z)\,d\lambda=\frac{\int e^{nL^{*}(\lambda)}\,d\lambda}{\int e^{nL(\lambda)}\,d\lambda},\tag{3.6}$$
where
$$L(\lambda)=\frac{1}{n}\ln\pi(\lambda|z)\tag{3.7}$$
and
$$L^{*}(\lambda)=L(\lambda)+\frac{1}{n}\ln\Phi(\lambda).\tag{3.8}$$

Following the Tierney-Kadane approach, Eq. (3.6) can be approximated as follows:

$$E[\Phi(\lambda)|Z]\approx\left(\frac{|\zeta^{*}|}{|\zeta|}\right)^{1/2}e^{n\left[L^{*}(\hat{\lambda}^{*})-L(\hat{\lambda})\right]}=\left(\frac{|\zeta^{*}|}{|\zeta|}\right)^{1/2}\frac{\Phi(\hat{\lambda}^{*})\,\pi(\hat{\lambda}^{*}|z)}{\pi(\hat{\lambda}|z)},\tag{3.9}$$
and
$$\frac{\partial^{2}L(\lambda)}{\partial\lambda^{2}}=-\frac{2}{n\lambda^{3}}\left[-(m+a+1)\lambda+\sum_{i=1}^{m}(R_{i}+1)z_{i}+b+\sum_{i=1}^{m}\frac{(R_{i}+2)(1-\alpha)z_{i}e^{-z_{i}/\lambda}}{1-(1-\alpha)e^{-z_{i}/\lambda}}\right]-\frac{1}{n\lambda^{2}}\left[(m+a+1)-\frac{1}{\lambda^{2}}\sum_{i=1}^{m}\frac{(R_{i}+2)(1-\alpha)z_{i}^{2}e^{-z_{i}/\lambda}}{\left(1-(1-\alpha)e^{-z_{i}/\lambda}\right)^{2}}\right],$$
$$\frac{\partial^{2}L^{*}(\lambda)}{\partial\lambda^{2}}=\frac{\partial^{2}L(\lambda)}{\partial\lambda^{2}}-\frac{1}{n}\,\frac{(m+a+1)^{2}}{\left(-(m+a+1)\lambda+\sum_{i=1}^{m}(R_{i}+1)z_{i}+b\right)^{2}},$$
where λ̂* and λ̂ maximize L*(λ) and L(λ), respectively, and ζ* and ζ are minus the inverses of the second derivatives of L*(λ) and L(λ) at λ̂* and λ̂, respectively.

We apply this approximation to obtain the Bayes estimator of the scale parameter λ as follows. In our setting,

$$L(\lambda)=\frac{1}{n}\left[-(m+a+1)\ln\lambda-\frac{1}{\lambda}\left(\sum_{i=1}^{m}(R_{i}+1)z_{i}+b\right)-\sum_{i=1}^{m}(R_{i}+2)\ln\left(1-(1-\alpha)e^{-z_{i}/\lambda}\right)\right]\tag{3.10}$$
and
$$L^{*}(\lambda)=L(\lambda)+\frac{1}{n}\ln\lambda.\tag{3.11}$$

By substituting Eqs. (3.10) and (3.11) into Eq. (3.9), the Bayes estimator λ̂AB of the function Φ(λ) = λ under the SEL takes the form

$$\hat{\lambda}_{AB}=E[\lambda|Z]\approx\left(\frac{|\zeta^{*}|}{|\zeta|}\right)^{1/2}\hat{\lambda}^{*}\left(\frac{\hat{\lambda}}{\hat{\lambda}^{*}}\right)^{m+a+1}e^{\left[\sum_{i=1}^{m}(R_{i}+1)Z_{i}+b\right]\left(\frac{1}{\hat{\lambda}}-\frac{1}{\hat{\lambda}^{*}}\right)}\prod_{i=1}^{m}\frac{\left(1-(1-\alpha)e^{-Z_{i}/\hat{\lambda}}\right)^{R_{i}+2}}{\left(1-(1-\alpha)e^{-Z_{i}/\hat{\lambda}^{*}}\right)^{R_{i}+2}}.\tag{3.12}$$
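The route of this section can be sketched numerically: maximize L and L* (here by golden-section search), take ζ and ζ* from finite-difference second derivatives, and assemble the ratio in Eq. (3.9). This is a minimal Python rendition under our own naming, not the paper's closed-form bookkeeping; working with n·L (the log-posterior kernel) leaves the ratio unchanged:

```python
import math

def tk_bayes(z, R, alpha, a, b):
    """Tierney-Kadane-type approximation to E[lambda | Z] (the Eq. (3.12) route)."""
    m = len(z)
    S = sum((r + 1) * zi for zi, r in zip(z, R)) + b

    def nL(lam):                         # n * L(lambda): log-posterior kernel, Eq. (3.10)
        v = -(m + a + 1) * math.log(lam) - S / lam
        for zi, r in zip(z, R):
            v -= (r + 2) * math.log(1.0 - (1.0 - alpha) * math.exp(-zi / lam))
        return v

    def nLstar(lam):                     # n * L*(lambda) for Phi(lambda) = lambda, Eq. (3.11)
        return nL(lam) + math.log(lam)

    def argmax(f, lo=1e-3, hi=50.0, iters=100):
        # golden-section search; f unimodal on the bracket is an assumption
        g = (math.sqrt(5.0) - 1.0) / 2.0
        x1, x2 = hi - g * (hi - lo), lo + g * (hi - lo)
        f1, f2 = f(x1), f(x2)
        for _ in range(iters):
            if f1 < f2:
                lo, x1, f1 = x1, x2, f2
                x2 = lo + g * (hi - lo)
                f2 = f(x2)
            else:
                hi, x2, f2 = x2, x1, f1
                x1 = hi - g * (hi - lo)
                f1 = f(x1)
        return 0.5 * (lo + hi)

    def second(f, x, h=1e-4):            # central finite-difference second derivative
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

    lam_hat, lam_star = argmax(nL), argmax(nLstar)
    zeta, zeta_star = -1.0 / second(nL, lam_hat), -1.0 / second(nLstar, lam_star)
    return math.sqrt(zeta_star / zeta) * math.exp(nLstar(lam_star) - nL(lam_hat))
```

For α = 1 the exact posterior mean is available in closed form (inverse gamma), which gives a direct accuracy check; with only m = 3 observations the approximation carries a visible small-sample bias of a few percent.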

## 4 Numerical Computation

In this section, we present a simulation study and numerical computations to compare the performances of the different estimators: the AMLE and the Bayes estimator against the MLE of λ. To this end, using the algorithm of Balakrishnan and Sandhu (1995), we generate progressively type-II censored samples from the standard MOE distribution with λ = 1. We compute the AMLE from Eq. (2.7). The MLE of λ is obtained by solving the nonlinear Eq. (2.4), with the AMLE used as the starting value for the Newton-Raphson method. The Bayes estimator of λ is obtained from Eq. (3.12). All computations were carried out with the Mathematica 6.0.1 software package over 4000 Monte Carlo simulations. The simulations are carried out for sample sizes n = 10, 20, 30, 50, with different choices of the effective sample size m and different progressive censoring schemes. For brevity, censoring schemes are written in a compact notation such as ((m − 1) * 0, n − m) and (n − m, (m − 1) * 0); for example, (5, 4 * 0) denotes the progressive censoring scheme (5, 0, 0, 0, 0).

We present the results for the AMLEs, the MLEs and the Bayes estimators when λ = 1, for fixed shape parameter α = 1.5, 2, 2.5, 3, in Tables 1-4.

| n | m | Scheme | AMLE | MLE | Bayes |
|---|---|--------|------|-----|-------|
| 10 | 5 | (4 * 0, 5) | 1.2613 | 1.0459 | 0.9037 |
| | | (5, 4 * 0) | 1.3789 | 0.9651 | 0.7424 |
| | | (1, 1, 1, 1, 1) | 1.3124 | 1.0200 | 1.1789 |
| 20 | 5 | (4 * 0, 15) | 2.4123 | 0.9965 | 0.9616 |
| | | (15, 4 * 0) | 2.5789 | 0.9265 | 1.0005 |
| | | (5, 0, 5, 0, 5) | 2.4488 | 1.0008 | 0.9475 |
| | 10 | (9 * 0, 10) | 1.2522 | 1.0555 | 0.9104 |
| | | (10, 9 * 0) | 1.3471 | 0.9873 | 1.0004 |
| | | (4 * 0, 5, 5, 4 * 0) | 1.3099 | 1.0511 | 0.9063 |
| | 15 | (5 * 1, 10 * 0) | 0.9673 | 0.9708 | 0.8797 |
| | | (10 * 0, 5 * 1) | 0.9083 | 0.9205 | 0.8353 |
| | | (5, 14 * 0) | 0.9679 | 0.9678 | 0.9063 |
| | | (14 * 0, 5) | 0.9574 | 1.0461 | 0.9417 |
| 30 | 10 | (20, 9 * 0) | 1.9691 | 1.0521 | 0.9167 |
| | | (9 * 0, 20) | 1.8055 | 1.0757 | 0.9214 |
| | | (3 * 0, 5, 5, 5, 5, 3 * 0) | 1.8864 | 1.0247 | 0.8802 |
| | 15 | (15, 14 * 0) | 1.3798 | 1.0824 | 0.9785 |
| | | (14 * 0, 15) | 1.2456 | 1.0465 | 0.9446 |
| | 20 | (10, 19 * 0) | 1.0537 | 0.9748 | 0.9517 |
| | | (19 * 0, 10) | 1.0067 | 1.0165 | 0.9422 |
| | 25 | (5, 24 * 0) | 0.9152 | 1.0231 | 0.9616 |
| | | (24 * 0, 5) | 0.9078 | 1.0516 | 0.9889 |
| 50 | 20 | (30, 19 * 0) | 1.6283 | 0.9669 | 0.8964 |
| | | (19 * 0, 30) | 1.4904 | 0.9639 | 0.8908 |
| | 30 | (20, 29 * 0) | 1.1372 | 0.9776 | 0.9283 |
| | | (29 * 0, 20) | 1.0618 | 0.9774 | 0.9281 |

Table 1: The AMLEs, MLEs and Bayes estimators of the scale parameter λ when the data are simulated from the MOE distribution with α = 1.5.

| n | m | Scheme | AMLE | MLE | Bayes |
|---|---|--------|------|-----|-------|
| 10 | 5 | (4 * 0, 5) | 0.9386 | 0.9020 | 0.9452 |
| | | (5, 4 * 0) | 1.0951 | 0.9099 | 0.9574 |
| | | (1, 1, 1, 1, 1) | 1.0868 | 1.0538 | 0.8930 |
| 20 | 5 | (4 * 0, 15) | 1.6860 | 1.0651 | 1.0956 |
| | | (15, 4 * 0) | 1.990 | 1.0275 | 1.2417 |
| | | (5, 0, 5, 0, 5) | 1.7447 | 1.0792 | 1.0861 |
| | 10 | (9 * 0, 10) | 0.9941 | 1.0232 | 0.9013 |
| | | (10, 9 * 0) | 1.1896 | 1.0490 | 0.9239 |
| | | (4 * 0, 5, 5, 4 * 0) | 1.1396 | 1.0864 | 0.9503 |
| | 15 | (5 * 1, 10 * 0) | 0.9061 | 1.0133 | 0.9269 |
| | | (10 * 0, 5 * 1) | 0.9233 | 1.0780 | 0.9888 |
| | | (5, 14 * 0) | 0.9795 | 1.0834 | 0.9923 |
| | | (14 * 0, 5) | 0.8994 | 1.0383 | 0.9544 |
| 30 | 10 | (20, 9 * 0) | 1.5942 | 1.1073 | 0.9745 |
| | | (9 * 0, 20) | 1.2445 | 1.0301 | 0.9788 |
| | | (3 * 0, 5, 5, 5, 5, 3 * 0) | 1.4291 | 1.0639 | 0.9290 |
| | 15 | (15, 14 * 0) | 1.1135 | 0.9660 | 0.9409 |
| | | (14 * 0, 15) | 0.9927 | 1.0389 | 0.9505 |
| | 20 | (10, 19 * 0) | 0.9603 | 1.0162 | 0.9499 |
| | | (19 * 0, 10) | 0.8905 | 1.0176 | 0.9534 |
| | 25 | (5, 24 * 0) | 0.8628 | 1.0114 | 0.9583 |
| | | (24 * 0, 5) | 0.8800 | 1.0548 | 0.9527 |
| 50 | 20 | (30, 19 * 0) | 1.3174 | 1.0124 | 0.9467 |
| | | (19 * 0, 30) | 1.1037 | 1.0036 | 0.9358 |
| | 30 | (20, 29 * 0) | 0.9920 | 0.9813 | 0.9386 |
| | | (29 * 0, 20) | 0.8799 | 0.9625 | 0.9208 |

Table 2: The AMLEs, MLEs and Bayes estimators of the scale parameter λ when the data are simulated from the MOE distribution with α = 2.

| n | m | Scheme | AMLE | MLE | Bayes |
|---|---|--------|------|-----|-------|
| 10 | 5 | (4 * 0, 5) | 0.9035 | 0.9951 | 0.8130 |
| | | (5, 4 * 0) | 1.1074 | 0.9838 | 0.8435 |
| | | (1, 1, 1, 1, 1) | 1.0596 | 1.0848 | 0.8867 |
| 20 | 5 | (4 * 0, 15) | 1.2635 | 0.9471 | 0.9439 |
| | | (15, 4 * 0) | 1.6357 | 0.9897 | 0.9587 |
| | | (5, 0, 5, 0, 5) | 1.4150 | 1.0985 | 0.9753 |
| | 10 | (9 * 0, 10) | 0.9181 | 1.0214 | 0.9126 |
| | | (10, 9 * 0) | 1.1171 | 1.0365 | 0.9253 |
| | | (4 * 0, 5, 5, 4 * 0) | 1.0253 | 1.0547 | 0.9399 |
| | 15 | (5 * 1, 10 * 0) | 0.9514 | 1.0522 | 0.9724 |
| | | (10 * 0, 5 * 1) | 0.8945 | 1.0167 | 0.9426 |
| | | (5, 14 * 0) | 0.9223 | 0.9991 | 0.9262 |
| | | (14 * 0, 5) | 0.8543 | 0.9922 | 0.9210 |
| 30 | 10 | (20, 9 * 0) | 1.3269 | 0.9923 | 0.9816 |
| | | (9 * 0, 20) | 1.0632 | 1.0455 | 0.9238 |
| | | (3 * 0, 5, 5, 5, 5, 3 * 0) | 1.1806 | 0.9642 | 0.9403 |
| | 15 | (15, 14 * 0) | 1.0496 | 0.9737 | 0.9022 |
| | | (14 * 0, 15) | 0.9034 | 1.0077 | 0.9322 |
| | 20 | (10, 19 * 0) | 0.9484 | 1.0021 | 0.9450 |
| | | (19 * 0, 10) | 0.8769 | 1.0077 | 0.9520 |
| | 25 | (5, 24 * 0) | 0.9086 | 1.0429 | 0.9939 |
| | | (24 * 0, 5) | 0.8777 | 1.0294 | 0.9829 |
| 50 | 20 | (30, 19 * 0) | 1.1446 | 1.0026 | 0.9448 |
| | | (19 * 0, 30) | 0.9577 | 1.0185 | 0.9563 |
| | 30 | (20, 29 * 0) | 0.9853 | 1.0269 | 0.9868 |
| | | (29 * 0, 20) | 0.8892 | 1.0225 | 0.9829 |

Table 3: The AMLEs, MLEs and Bayes estimators of the scale parameter λ when the data are simulated from the MOE distribution with α = 2.5.

| n | m | Scheme | AMLE | MLE | Bayes |
|---|---|--------|------|-----|-------|
| 10 | 5 | (4 * 0, 5) | 0.9059 | 0.9966 | 0.8345 |
| | | (5, 4 * 0) | 1.0794 | 0.9807 | 0.8140 |
| | | (1, 1, 1, 1, 1) | 0.9935 | 1.0159 | 0.8480 |
| 20 | 5 | (4 * 0, 15) | 1.0716 | 0.9929 | 0.8014 |
| | | (15, 4 * 0) | 1.5046 | 1.0134 | 0.8309 |
| | | (5, 0, 5, 0, 5) | 1.2137 | 1.0464 | 0.8563 |
| | 10 | (9 * 0, 10) | 0.9093 | 1.0437 | 0.9406 |
| | | (10, 9 * 0) | 1.0836 | 1.0194 | 0.9181 |
| | | (4 * 0, 5, 5, 4 * 0) | 1.0389 | 1.0694 | 0.9616 |
| | 15 | (5 * 1, 10 * 0) | 0.9782 | 1.0469 | 0.9757 |
| | | (10 * 0, 5 * 1) | 0.9318 | 1.0303 | 0.9623 |
| | | (5, 14 * 0) | 0.9382 | 0.9909 | 0.9248 |
| | | (14 * 0, 5) | 0.9218 | 1.0253 | 0.9589 |
| 30 | 10 | (20, 9 * 0) | 1.3086 | 1.0526 | 0.9552 |
| | | (9 * 0, 20) | 0.9542 | 1.0331 | 0.9225 |
| | | (3 * 0, 5, 5, 5, 5, 3 * 0) | 1.1440 | 1.0475 | 0.9331 |
| | 15 | (15, 14 * 0) | 1.0708 | 1.0157 | 0.9468 |
| | | (14 * 0, 15) | 0.9047 | 1.0250 | 0.9551 |
| | 20 | (10, 19 * 0) | 0.9276 | 0.9700 | 0.9202 |
| | | (19 * 0, 10) | 0.8942 | 1.0069 | 0.9566 |
| | 25 | (5, 24 * 0) | 0.9218 | 1.0246 | 0.9809 |
| | | (24 * 0, 5) | 0.9069 | 1.0215 | 0.9799 |
| 50 | 20 | (30, 19 * 0) | 1.1299 | 0.9929 | 0.9411 |
| | | (19 * 0, 30) | 0.8886 | 0.9968 | 0.9417 |
| | 30 | (20, 29 * 0) | 0.9618 | 0.9897 | 0.9545 |
| | | (29 * 0, 20) | 0.8584 | 0.9802 | 0.9462 |

Table 4: The AMLEs, MLEs and Bayes estimators of the scale parameter λ when the data are simulated from the MOE distribution with α = 3.

Finally, we present an example based on data simulated from the MOE distribution to illustrate the performance of the different estimators of the scale parameter λ.

### Example 4.1

A progressively type-II censored sample of size m = 10, with complete sample size n = 31, was generated from the MOE distribution with λ = 2 and censoring scheme (1, 2, 3, 4, 5, 0, 0, 0, 0, 6) using the Balakrishnan and Sandhu (1995) algorithm. The generated progressively type-II censored sample is

{0.321312, 0.352673, 0.838508, 1.57235, 1.5746, 2.07522, 2.20029, 3.348, 4.32915, 4.36173}
It is found that the AMLE is 1.9938, the MLE is 1.9986 and the Bayes estimator is 1.8041. It is observed that the MLE is the closest estimator to the true scale parameter λ = 2, while the Bayes estimator lies slightly farther from λ = 2.

## 5 Conclusion

This paper studied estimators of the unknown scale parameter λ under progressively type-II censored samples from the MOE distribution. It is observed that the MLE and the Bayes estimator cannot be obtained in closed form. The AMLE is used as the starting point for finding the MLE, and a Laplace-type (Tierney-Kadane) approximation is used to find a numerical approximate solution for the Bayes estimator. It is found that the performance of the MLE and the AMLE are very close to each other, while the Bayes estimator lies slightly farther from the MLE and the AMLE.

## Acknowledgement

The author would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work. The author would also like to thank the editors for their cooperation, and is grateful to the anonymous referee for carefully checking the details and for helpful comments that improved this paper.