# Journal of Statistical Theory and Applications

Volume 17, Issue 1, March 2018, Pages 39 - 58

# A New Method for Generating Discrete Analogues of Continuous Distributions

Authors
M. Ganji (mganji@uma.ac.ir), F. Gharari (f.gharari@uma.ac.ir)
Department of Statistics, University of Mohaghegh Ardabili, Ardabil, Iran.
Received 9 May 2017, Accepted 12 September 2017, Available Online 31 March 2018.
DOI
10.2991/jsta.2018.17.1.4
Keywords
Discrete fractional calculus; Time scale; Gamma distribution
Abstract

In this paper we use discrete fractional calculus to show the existence of delta and nabla discrete distributions, and then apply time scales to define the delta and nabla discrete gamma distributions. The main result of the paper is a unification of the continuous and discrete gamma distributions, which is at the same time a distribution on a so-called time scale. Also, starting from the Laplace transform on time scales, we develop the concept of a moment generating function for these distributions.

Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

## 1 Introduction

One of the active areas of research in statistics is the modeling of discrete lifetime data by developing discretized versions of suitable continuous lifetime distributions. The discretization of a continuous distribution using different methods has attracted renewed attention of researchers in the last few years; see, for example, [3, 8, 12, 13, 14, 15, 16, 17]. Recently, these methods were classified in detail, based on different discretization criteria, by Chakraborty [9].

In this article, we present a new method for the discretization of most continuous distributions whose probability density functions (pdfs) are built from the Taylor monomial and the exponential function, and as an example we discretize the gamma distribution with this method. Our discretization method has two main advantages over prior methods. First, for a given continuous distribution it generates two types (delta and nabla) of corresponding discrete distributions. Second, it unifies the continuous distribution and its discrete counterparts into a single distribution on a time scale. We use discrete fractional calculus to show the existence of delta and nabla discrete distributions, and then apply time scales to define the delta and nabla discrete distributions, obtaining a unifying theory under which continuous and discrete distributions are subsumed.

The article is organized as follows. The second section summarizes notation and definitions of delta and nabla calculus, together with the definitions of the delta and nabla Riemann right fractional sums and differences. In the third section, we use discrete fractional calculus to show the existence of delta and nabla discrete distributions. In the fourth and fifth sections, we define novel types of moments for delta and nabla discrete distributions and provide a method for obtaining these moments via the Laplace transform on the discrete time scale. The sixth section contains a unification of the ordinary, delta and nabla moments, and of the corresponding mgfs. In Section 7, the delta and nabla discrete gamma distributions are defined, while Section 8 contains a unification of the discrete and continuous gamma distributions. In the final section an application of the proposed distribution is presented.

## 2 Preliminaries

In this section, we collect definitions and related results which are essential for the subsequent discussion. As mentioned in [5, 6], the definitions and theorem are as follows.

A time scale π is an arbitrary nonempty closed subset of the real numbers β. The most well-known examples are π = β and π = β€. The forward (backward) jump operator is defined by Ο(t) := inf{s β π : s > t} (Ο(t) := sup{s β π : s < t}), where infβ := supπ and supβ := infπ. A point t β π is said to be right-dense if t < supπ and Ο(t) = t (left-dense if t > infπ and Ο(t) = t), right-scattered if Ο(t) > t (left-scattered if Ο(t) < t). The forward (backward) graininess function ΞΌ : π β [0, β) (Ξ½ : π β [0, β)) is defined by ΞΌ(t) := Ο(t) β t (Ξ½(t) := t β Ο(t)). More generally, we will denote all Ο(t), Ο(t) and t with Ξ·(t).

### Definition 2.1.

A function f : π β β is called regulated if its right-sided limits exist at all right-dense points in π and its left-sided limits exist at all left-dense points in π.

### Definition 2.2.

A function f : π β β is called rd-continuous (ld-continuous) if it is continuous at right-dense (left-dense) points in π and its left-sided (right-sided) limits exist at left-dense (right-dense) points in π.

The set πk (πk*) is derived from the time scale π as follows: If π has a left-scattered maximum (right-scattered minimum) m, then πk := π β {m} (πk*:=πβ{m}). Otherwise, πk := π (πk*:=π).

### Definition 2.3.

A function f : π β β is said to be delta (nabla) differentiable at a point t β πk (tβπk*) if there exists a number fβ(t) (fβ(t)) with the property that given any Ο΅ > 0, there exists a neighborhood U of t such that

|f(Ο(t))βf(s)βfΞ(t)(Ο(t)βs)|β€Ι|Ο(t)βs|(|f(Ο(t))βf(s)βfβ(t)(Ο(t)βs)|β€Ι|Ο(t)βs|)
for all s β U.

For a function f : π β β it is possible to introduce a derivative fβ(t) (fβ(t)) and an integral β«abf(t)Ξt (β«abf(t)βt) in such a manner that fβ(t) = fβ² (fβ(t) = fβ²) and β«abf(t)Ξt=β«abf(t)dt (β«abf(t)βt=β«abf(t)dt) in the case π = β and fβ(t) = βf (fβ(t) = βf) and β«abf(t)Ξt=βabβ1f(t) (β«abf(t)βt=βa+1bf(t)) in the case π = β€, where, the forward and backward difference operators are defined by βf = f(t + 1) β f(t) and = f(t) β f(t β1), respectively. Also, we define the iterated operators βn = β(βnβ1) and βn = β(βnβ1) for n β β.
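On 𝕋 = ℤ these reductions can be checked mechanically. The snippet below is a small illustration of our own (not from the paper): the delta and nabla integrals become finite sums, and summing a difference telescopes, mirroring the fundamental theorem of calculus on this time scale.

```python
def delta(f, t):
    """Forward difference: (Delta f)(t) = f(t + 1) - f(t)."""
    return f(t + 1) - f(t)

def nabla(f, t):
    """Backward difference: (nabla f)(t) = f(t) - f(t - 1)."""
    return f(t) - f(t - 1)

def delta_integral(f, a, b):
    """int_a^b f(t) Delta t = sum_{t=a}^{b-1} f(t) on T = Z."""
    return sum(f(t) for t in range(a, b))

def nabla_integral(f, a, b):
    """int_a^b f(t) nabla t = sum_{t=a+1}^{b} f(t) on T = Z."""
    return sum(f(t) for t in range(a + 1, b + 1))

f = lambda t: t * t
# Summing the difference telescopes to the endpoint values.
assert delta_integral(lambda t: delta(f, t), 0, 10) == f(10) - f(0)
assert nabla_integral(lambda t: nabla(f, t), 0, 10) == f(10) - f(0)
```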

### Definition 2.4.

A function p : π β β is called ΞΌβregressive (Ξ½βregressive) provided 1 + ΞΌ(t)p(t) β  o (1 β Ξ½(t)p(t) β  o) for all t β πk (tβπk*).

The set βΞΌ (βΞ½) of all ΞΌβregressive and rd-continuous (Ξ½βregressive and ld-continuous) functions forms an Abelian group under the circle plus addition β defined by (p β q)(t) := p(t) + q(t) + ΞΌ(t)p(t)q(t) ((p β q)(t) := p(t) + q(t) β Ξ½(t)p(t)q(t)) for all t β πk (tβπk*). The additive inverse βp of p β βΞΌ (p β βΞ½) is defined by

(βp)(t):=βp(t)1+ΞΌ(t)p(t)βββββββββββββββ((βp)(t):=βp(t)1βΞ½(t)p(t))
for all t β πk (tβπk*).

For real numbers a and b we denote βa = {a, a + 1, ...} and bβ = {b, b β 1, ...}.

### Theorem 2.5.

Let p β βΞΌ (p β βΞ½) and t0 β π be a fixed point. Then the delta (nabla) exponential function ep(., t0) (ep*(.,t0)) is the unique solution of the initial value problem

yΞ=p(t)y,βββββy(t0)=1βββββββ(yβ=p(t)y,βββββy(t0)=1).

If π = βa, when p(t) β‘ p, where p β βΞΌ (p β βΞ½ = β\{1}) and t0 = a, it is easy to see that ep(t, a) = (1+p)tβa (ep*(t,a)=(1βp)aβt) and if π = β, ep(t, a) = ep(tβa) (ep*(t,a)=ep(tβa)), where βeβ is ordinary exponential function. Moreover, in the special case, e1(t, 0) = 2t (e12*(t,0)=2t). More generally, we will denote all ep(t, a), ep*(t,a) and ep(tβa) with Γͺp(t, a).

### Definition 2.6.

The delta (nabla) Taylor monomials are the functions $h_n : \mathbb{T}\times\mathbb{T} \to \mathbb{R}$, n ∈ ℕ₀, defined recursively as follows:

$$h_0(t,s) = 1, \qquad h_{n+1}(t,s) = \int_s^t h_n(\tau, s)\,\Delta\tau, \qquad t, s \in \mathbb{T}$$

$$\Big(h_0^*(t,s) = 1, \qquad h_{n+1}^*(t,s) = \int_s^t h_n^*(\tau, s)\,\nabla\tau, \qquad t, s \in \mathbb{T}\Big).$$

We consider three cases for the time scale 𝕋.

1. (a)

If 𝕋 = ℝ, then σ(t) = ρ(t) = t and the Taylor monomials can be written explicitly as

$$h_n(t,s) = h_n^*(t,s) = \frac{(t-s)^n}{n!}, \qquad t, s \in \mathbb{R}, \quad n \in \mathbb{N}_0.$$

For each α ∈ ℝ ∖ {−1, −2, …} define the α-th Taylor monomial to be

$$h_{\alpha}(t,s) = \frac{(t-s)^{\alpha}}{\Gamma(\alpha+1)},$$

where Γ denotes the gamma function.

In this paper we only consider the special case $h_{\alpha}(t) := h_{\alpha}(t,0) = \frac{t^{\alpha}}{\Gamma(\alpha+1)}$ as the Taylor monomial (tm).

2. (b)

If π = β€, then Ο(t) = t + 1 and the Taylor monomials can be written explicitly as

hn(t,s)=(tβs)n_n!,βββββββββt,sββ€,ββββnββ0,
where tn_=Ξ j=0nβ1(tβj)=Ξ(t+1)Ξ(t+1βj) and product is zero when t + 1 β j = 0 for some j. More generally, for Ξ± arbitrary define tΞ±_=Ξ(t+1)Ξ(t+1βΞ±), where the convention of that division at pole yields zero. This generalized falling function allows us to extend (2.5) to define a general Taylor monomial that will serve us well in the probability distributions setting.

For each Ξ± β β \ {ββ}, define the delta Ξ±βth Taylor monomial to be

hΞ±(t,s)=(tβs)Ξ±_Ξ(Ξ±+1).

In this paper, we only consider the special case hΞ±_(t):=hΞ±(t,0)=tΞ±_Ξ(Ξ±+1) as delta Taylor monomial (dtm).

3. (c)

If π = β€, then Ο(t) = t β 1 and the Taylor monomials can be written explicitly as

hn(t,s)=(tβs)nΒ―n!,ββββββββt,sββ€,ββnββ0,
where tnΒ―=Ξ j=0nβ1(t+j)=Ξ(t+n)Ξ(t). More generally, for any real number Ξ± rising function is defined as tΞ±Β―=Ξ(t+Ξ±)Ξ(t) where t β β \ {ββ0} and 0Ξ±Β―=0. This function allows us to extend (2.7) in order to define a general Taylor monomial that will serve us well in the probability distributions setting.

For each Ξ± β β \ {ββ} define the nabla Ξ±βth Taylor monomial to be

hΞ±*(t,s)=(tβs)Ξ±Β―Ξ(Ξ±+1).

In this paper, we only consider the special case hΞ±Β―(t):=hΞ±*(t,0)=tΞ±Β―Ξ(Ξ±+1) as nabla Taylor monomial (ntm).

More generally, we will denote all hΞ±_(t), hΞ±Β―(t) and hΞ±(t) with Δ₯Ξ± (t).
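These gamma-function formulas translate directly into code. The helpers below are our own illustration (not from the paper), with the stated pole and zero conventions:

```python
import math

def falling(t, alpha):
    """Generalized falling function t^(underline alpha) = Gamma(t+1)/Gamma(t+1-alpha).
    By convention the value is 0 when the formula hits a gamma pole."""
    try:
        return math.gamma(t + 1) / math.gamma(t + 1 - alpha)
    except ValueError:  # math.gamma raises at non-positive integers (poles) -> 0
        return 0.0

def rising(t, alpha):
    """Rising function t^(overline alpha) = Gamma(t+alpha)/Gamma(t), with 0^(overline alpha) = 0."""
    if t == 0:
        return 0.0
    return math.gamma(t + alpha) / math.gamma(t)

# Integer orders agree with the defining products.
assert abs(falling(7, 3) - 7 * 6 * 5) < 1e-9
assert abs(rising(4, 3) - 4 * 5 * 6) < 1e-9
assert falling(2, 4) == 0.0  # the product 2*1*0*(-1) contains a zero factor
```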

### Definition 2.7.

The delta (nabla) Laplace transform of a regulated function $f : \mathbb{T}_a \to \mathbb{R}$ is given by

$$\mathcal{L}_a\{f\}(s) = \int_a^{\infty} e_{\ominus s}(\sigma(t), a)\,f(t)\,\Delta t \qquad \Big(\mathcal{L}_a^*\{f\}(s) = \int_a^{\infty} e_{\ominus s}^*(\rho(t), a)\,f(t)\,\nabla t\Big),$$

for all s ∈ 𝒟{f}, where a ∈ ℝ is fixed, 𝕋_a is an unbounded time scale with infimum a, and 𝒟{f} is the set of all regressive complex constants for which the integral converges. In the special case 𝕋 = ℤ, every function is regulated and the delta (nabla) discrete Laplace transform can be written as

$$\mathcal{L}_a\{f\}(s) = \sum_{t=a}^{\infty}\Big(\frac{1}{1+s}\Big)^{\sigma(t)} f(t) \qquad \Big(\mathcal{L}_a^*\{f\}(s) = \sum_{t=a}^{\infty}(1-s)^{\rho(t)} f(t)\Big).$$
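These series can be exercised numerically; the snippet below (our own sketch, with our own function names) checks that the delta transform of f ≡ 1 on ℕ_0 and the nabla transform of f ≡ 1 on ℕ_1 both equal 1/s, matching the classical Laplace transform of the constant function 1.

```python
def delta_laplace(f, s, a=0, terms=10_000):
    """Truncated delta discrete Laplace transform: sum_{t>=a} (1/(1+s))**(t+1) f(t)."""
    return sum((1.0 / (1.0 + s)) ** (t + 1) * f(t) for t in range(a, a + terms))

def nabla_laplace(f, s, a=1, terms=10_000):
    """Truncated nabla discrete Laplace transform: sum_{t>=a} (1-s)**(t-1) f(t)."""
    return sum((1.0 - s) ** (t - 1) * f(t) for t in range(a, a + terms))

s = 0.5
assert abs(delta_laplace(lambda t: 1.0, s) - 1.0 / s) < 1e-9
assert abs(nabla_laplace(lambda t: 1.0, s) - 1.0 / s) < 1e-9
```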

Let b be a real number and $f : {}_b\mathbb{N} \to \mathbb{R}$. The delta Riemann right fractional sum of order α > 0 is defined by Abdeljawad [1] as

$$\Delta^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\sum_{s=t+\alpha}^{b}(\rho(s)-t)^{\underline{\alpha-1}}\,f(s), \qquad t \in {}_{b-\alpha}\mathbb{N}. \tag{2.10}$$

We define the nabla Riemann right fractional sum of order α > 0 as

$$\nabla^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\sum_{s=t}^{b}(\sigma(s)-t)^{\overline{\alpha-1}}\,f(s), \qquad t \in {}_{b}\mathbb{N}. \tag{2.11}$$

The delta Riemann right fractional difference of order α > 0 is defined by Abdeljawad [1] as

$$\Delta^{\alpha} f(t) = (-1)^n \nabla^n \Delta^{-(n-\alpha)} f(t),$$

for $t \in {}_{b-(n-\alpha)}\mathbb{N}$ and n = [α] + 1, where [α] is the greatest integer less than α. Also, the nabla Riemann right fractional difference of order α > 0 is defined by

$$\nabla^{\alpha} f(t) = (-1)^n \Delta^n \nabla^{-(n-\alpha)} f(t),$$

for $t \in {}_{b-n}\mathbb{N}$.

In [2], the author obtained the following alternative form of the delta Riemann right fractional difference:

$$\Delta^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)}\sum_{s=t-\alpha}^{b}(\rho(s)-t)^{\underline{-\alpha-1}}\,f(s). \tag{2.12}$$

Similarly, we can prove the following formula for the nabla Riemann right fractional difference:

$$\nabla^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)}\sum_{s=t}^{b}(\sigma(s)-t)^{\overline{-\alpha-1}}\,f(s). \tag{2.13}$$

For an introduction to discrete fractional calculus the reader is referred to [11].

## 3 Generating discrete distributions by discrete fractional calculus

The following results show the relationship between continuous and discrete fractional calculus and statistics, and also allow us to define different types of discrete distributions. Suppose that X is a positive continuous random variable. The expectation of the tm function $h_{\alpha-1}(X)$ coincides with the Riemann-Liouville right fractional integral of the pdf at the origin for α > 0, and with the Marchaud right fractional derivative of the pdf at the origin for −1 < α < 0; that is, we have

$$E[h_{\alpha-1}(X)] = \begin{cases}(I_-^{\alpha}f)(0), & \alpha > 0,\\ (D_-^{-\alpha}f)(0), & -1 < \alpha < 0,\end{cases}$$

where

$$(I_-^{\alpha}f)(t) = \frac{1}{\Gamma(\alpha)}\int_0^{\infty} x^{\alpha-1} f(x+t)\,dx$$

is the Riemann-Liouville right fractional integral, while

$$(D_-^{\alpha}f)(t) = \frac{1}{\Gamma(-\alpha)}\int_0^{\infty} x^{-\alpha-1}\{f(x+t) - f(t)\}\,dx$$

is the Marchaud right fractional derivative [10].

It can be seen that the limits of the above integrals correspond to the support of the random variable X. Considering this point, we present the following theorems for a discrete random variable X.

### Theorem 3.1.

Suppose that X is a discrete random variable. The expectation of the dtm function $h_{\underline{\alpha-1}}(X)$ coincides with the delta Riemann right fractional sum of the probability mass function (pmf) at −1 for α > 0, and with the delta Riemann right fractional difference of the pmf at −1 for α < 0, α ∉ {−1, −2, …}; i.e.

$$E[h_{\underline{\alpha-1}}(X)] = \begin{cases}(\Delta^{-\alpha}f)(-1), & \alpha > 0,\\ (\Delta^{\alpha}f)(-1), & \alpha < 0,\ \alpha \notin \{\dots, -2, -1\},\end{cases}$$

where

$$(\Delta^{-\alpha}f)(t) = \sum_{x=\alpha-1}^{b-1-t}\frac{x^{\underline{\alpha-1}}}{\Gamma(\alpha)}f(x+t+1)$$

is the delta Riemann right fractional sum, while

$$(\Delta^{\alpha}f)(t) = \sum_{x=-\alpha-1}^{b-t-1}\frac{x^{\underline{-\alpha-1}}}{\Gamma(-\alpha)}f(x+t+1)$$

is the delta Riemann right fractional difference.

### Proof.

For α > 0, substitute x = ρ(s) − t in expression (2.10); for α < 0 and α ∉ {−1, −2, …}, make the same substitution in expression (2.12).

Here, considering the limits of summation, we can define discrete distributions with support ℕ_{α−1} or a finite subset of it. In this case we call X a delta discrete random variable. As an example, we will define the delta discrete gamma distribution. Another example is the delta discrete uniform distribution DU{α−1, α, …, α+β}, where α ∈ ℝ and β ∈ ℕ_{−1}.

### Theorem 3.2.

We suppose that X is a discrete random variable. The expectation of the ntm function $h_{\overline{\alpha-1}}(X)$ coincides with the nabla Riemann right fractional sum of the pmf at 1 for α > 0, and with the nabla Riemann right fractional difference of the pmf at 1 for α < 0, α ∉ {−1, −2, …}; i.e.

$$E[h_{\overline{\alpha-1}}(X)] = \begin{cases}(\nabla^{-\alpha}f)(1), & \alpha > 0,\\ (\nabla^{\alpha}f)(1), & \alpha < 0,\ \alpha \notin \{\dots, -2, -1\},\end{cases}$$

where

$$(\nabla^{-\alpha}f)(t) = \sum_{x=1}^{b+1-t}\frac{x^{\overline{\alpha-1}}}{\Gamma(\alpha)}f(x+t-1)$$

is the nabla Riemann right fractional sum, while

$$(\nabla^{\alpha}f)(t) = \sum_{x=1}^{b+1-t}\frac{x^{\overline{-\alpha-1}}}{\Gamma(-\alpha)}f(x+t-1)$$

is the nabla Riemann right fractional difference.

### Proof.

For α > 0, substitute x = σ(s) − t in expression (2.11); for α < 0 and α ∉ {−1, −2, …}, make the same substitution in expression (2.13).

Therefore, considering the limits of summation in the recent theorem, we can define discrete distributions with support ℕ_1 or a finite subset of it. In this case we call X a nabla discrete random variable. In this work, we will define the nabla discrete gamma distribution. Another example is the nabla discrete uniform distribution DU{1, 2, …, α−β+1}, where α ∈ ℝ and $\beta \in {}_{\alpha}\mathbb{N}$.

## 4 Nabla moments and nabla moment generating function

In this section, we define novel types of moments for delta and nabla discrete distributions and provide a method for obtaining these moments using the Laplace transform on the discrete time scale.

It is well known that the Laplace transform of a pdf is the moment generating function (mgf), defined as $M_X(-t) = E[e^{-tX}] = \int_0^{\infty} e^{-tx} f(x)\,dx$, where X is a non-negative real-valued random variable and t is a complex variable with non-negative real part. On the other hand, it can easily be seen that $M_X(-t) = \sum_{k=0}^{\infty} E[(-X)^k]\frac{t^k}{k!}$. This function generates the integer-order moments of the random variable X as $\mu_k = E[X^k] = (-1)^k \frac{d^k M_X(-t)}{dt^k}\big|_{t=0}$.

Now, suppose that X is a delta discrete random variable with values x ∈ ℕ_{α−1}, α > 0. The delta discrete Laplace transform of the pmf of X is defined as

$$\sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1+t}\Big)^{\sigma(x)} f(x) = E[(1+t)^{-\sigma(X)}] = E[e_{\ominus t}(\sigma(X), 0)] = M_{\sigma(X)}(-t),$$

where t ranges over the set of all regressive complex constants for which the series converges. By using the series expansion of $(1+t)^{-x}$ it can easily be proved that $M_{\sigma(X)}(-t) = \sum_{k=0}^{\infty} E[(-1)^k(\sigma(X))^{\overline{k}}]\frac{t^k}{k!}$. This function generates the integer-order nabla moments of X as $E[(\sigma(X))^{\overline{k}}] = (-1)^k\frac{d^k M_{\sigma(X)}(-t)}{dt^k}\big|_{t=0}$.
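The expansion invoked here, $(1+t)^{-x} = \sum_k (-1)^k x^{\overline{k}}\,t^k/k!$, is easy to confirm numerically (a check of our own):

```python
import math

def rising(x, k):
    """Rising factorial x^(overline k) = x (x+1) ... (x+k-1)."""
    out = 1.0
    for j in range(k):
        out *= x + j
    return out

x, t = 3, 0.1
series = sum((-1) ** k * rising(x, k) * t ** k / math.factorial(k) for k in range(80))
assert abs(series - (1 + t) ** (-x)) < 1e-12
```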

### Definition 4.1.

Let X be a delta discrete random variable with pmf f.

1. (a)

Its k-th nabla moment, denoted by $\mu_k^{\nabla}$, is defined by $\mu_k^{\nabla} := \sum_x (\sigma(x))^{\overline{k}} f(x)$.

2. (b)

The nabla mgf of X is given by $M_{\sigma(X)}(t) = E[e_t^*(\sigma(X), 0)]$.

### Theorem 4.2.

Let X be a delta discrete random variable with nabla moments $\mu_k^{\nabla}$. We have

$$M_{\sigma(X)}(t) = \sum_{k=0}^{\infty} E[(\sigma(X))^{\overline{k}}]\frac{t^k}{k!}. \tag{4.1}$$

In particular,

$$E[(\sigma(X))^{\overline{k}}] = \frac{d^k M_{\sigma(X)}(t)}{dt^k}\bigg|_{t=0}. \tag{4.2}$$

### Proof.

For the proof of (4.1) we use the series expansion of the function $(1-t)^{-x}$, that is, $\sum_{k=0}^{\infty} x^{\overline{k}}\frac{t^k}{k!}$. We have

$$M_{\sigma(X)}(t) = \sum_x\sum_{k=0}^{\infty}(\sigma(x))^{\overline{k}}\frac{t^k}{k!}f(x) = \sum_{k=0}^{\infty}\Big(\sum_x(\sigma(x))^{\overline{k}}f(x)\Big)\frac{t^k}{k!}.$$

For the proof of (4.2), differentiate $M_{\sigma(X)}(t)$ a total of k times. Since the only t-dependence in the summation is the factor $(1-t)^{-\sigma(x)}$, we have

$$\frac{d^k}{dt^k}M_{\sigma(X)}(t) = \sum_x\Big[\frac{d^k}{dt^k}(1-t)^{-\sigma(x)}\Big]f(x) = \sum_x(\sigma(x))^{\overline{k}}(1-t)^{-(\sigma(x)+k)}f(x);$$

the claim now follows by taking t = 0 and recalling the definition of the nabla moments. □

## 5 Delta moments and delta moment generating function

Suppose that X is a nabla discrete random variable with values x ∈ ℕ_1. The nabla discrete Laplace transform of the pmf of X is defined as

$$\sum_{x=1}^{\infty}(1-t)^{\rho(x)}f(x) = E[(1-t)^{\rho(X)}] = E[e_{\ominus t}^*(\rho(X), 0)] = M_{\rho(X)}(-t),$$

where t ranges over the set of all regressive complex constants for which the series converges. By using the series expansion of $(1-t)^x$, it can easily be proved that $M_{\rho(X)}(-t) = \sum_{k=0}^{\infty}E[(-1)^k(\rho(X))^{\underline{k}}]\frac{t^k}{k!}$. This function generates the integer-order delta moments of X as $E[(\rho(X))^{\underline{k}}] = (-1)^k\frac{d^k M_{\rho(X)}(-t)}{dt^k}\big|_{t=0}$.

### Definition 5.1.

Let X be a nabla discrete random variable with pmf f.

1. (a)

Its k-th delta moment, denoted by $\mu_k^{\Delta}$, is defined by $\mu_k^{\Delta} := \sum_x(\rho(x))^{\underline{k}}f(x)$.

2. (b)

The delta mgf of X is given by $M_{\rho(X)}(t) = E[e_t(\rho(X), 0)]$.

### Theorem 5.2.

Let X be a nabla discrete random variable with delta moments $\mu_k^{\Delta}$. We have

$$M_{\rho(X)}(t) = \sum_{k=0}^{\infty}E[(\rho(X))^{\underline{k}}]\frac{t^k}{k!}.$$

In particular,

$$E[(\rho(X))^{\underline{k}}] = \frac{d^k M_{\rho(X)}(t)}{dt^k}\bigg|_{t=0}.$$

### Proof.

The proof is similar to that of Theorem 4.2; we only point out that $(1+t)^x = \sum_{k=0}^{\infty}x^{\underline{k}}\frac{t^k}{k!}$.

More generally, we denote $\mu_k^{\Delta}$, $\mu_k^{\nabla}$ and $\mu_k$ collectively by $\hat{\mu}_k$, and $M_{\rho(X)}(t)$, $M_{\sigma(X)}(t)$ and $M_X(t)$ collectively by $M_{\eta(X)}(t)$; these are useful notations in statistics.

## 6 Unification of the mgf and moments

For a given time scale π, we present the construction of moments and the mgfs on time scales as

ΞΌ^k=E[Ξ(k+1)h^k(Ξ·(X))]ββββββandββββββββββMΞ·(X)(βt)=E[e^βt(Ξ·(X),0)],ββββββββxβπ
, respectively. In order that, the reader sees how ordinary moments and delta and nabla moments, also types of the mgfs follow from (6.1), it is only at this point necessary to know that
h^k(x)=hk(x)=xkΞ(k+1),ββββββββΞ·(x)=Ο(x)=Ο(x)=xβββandββe^βt(Ξ·(X),0)=eβtx,
if π = β+,
h^k(x)=hk_(x)=xk_Ξ(k+1),βββββββββΞ·(x)=Ο(x)ββββββandββββββββe^βt(Ξ·(X),0)=(1+t)βΟ(x),
if π = βΞ±β1, Ξ± > 0 and
h^k(x)=hkΒ―(x)=xkΒ―Ξ(k+1),βββββββββΞ·(x)=Ο(x)ββββββandββββββββe^βt(Ξ·(X),0)=(1βt)Ο(x),
if π = β1.

## 7 The delta and nabla discrete gamma distributions

In this section we introduce the delta and nabla discrete gamma distributions by replacing the continuous Taylor monomial and the exponential function in the continuous gamma distribution with their corresponding discrete versions (on the discrete time scale).

## 7.1 The delta discrete gamma distribution

### Definition 7.1.

The random variable X is said to have a delta discrete gamma distribution with parameters (α, β) if its pmf is given by

$$\Pr[X=x] = \frac{h_{\underline{\alpha-1}}(x)\,\beta^{\alpha}}{e_{\beta}(\sigma(x), 0)} = \frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}}, \qquad x \in \mathbb{N}_{\alpha-1}, \tag{7.1}$$

where α > 0 and β > 0; we denote this by $\Gamma_{\Delta}(\alpha, \beta)$.

β  Particular cases:

1. (a)

For Ξ± = 1, Ξβ(Ξ±, Ξ²) in (7.1) reduces to a one parameter delta discrete gamma or delta exponential distribution, Ξβ(1, Ξ²) β‘ Eβ(Ξ²) with pmf

Pr[X=x]=Ξ²(1+Ξ²)βΟ(x)=(Ξ²1+Ξ²)(11+Ξ²)x,βββββββββx=0,1,β¦.

Obviously, this is the pmf of geometric distribution (the number of failures for first success).

2. (b)

For Ξ± = n, n β β, Ξβ(Ξ±, Ξ²) in (7.1) is a delta discrete Erlang distribution Ξβ(n, Ξ²) with pmf

Pr[X=x]=(xβn+1x)Ξ²n(1+Ξ²)βΟ(x),ββββββββx=βnβ1.

If we substitute Ο(x) = x, (7.2) and (7.3) are given by

Pr[X=x]=(Ξ²1+Ξ²)(11+Ξ²)xβ1,βββββββx=1,2,β¦,
and
Pr[X=x]=(xβnxβ1)(Ξ²1+Ξ²)n(11+Ξ²)xβn,βββββββx=n,n+1,β¦,
respectively. It can be seen that (7.5) is the same negative binomial distribution (the number of independent trials required for n successes) and (7.4) is the same geometric distribution (the number of independent trials required for first success). Therefore, we call (7.3) the delta negative binomial distribution and its special case (7.2) is the delta geometric distribution. Then, the delta discrete exponential distribution is the same delta geometric distribution.

3. (c)

For Ξ±=n2, n β β, and Ξ²=12, Ξβ(Ξ±, Ξ²) in (7.1) is delta discrete Chi-square distribution, Ο2β with pmf

Pr[X=x]=xn2β1Ξ(n2)2n2(23)Ο(x),βββββββββββββββx=n2β1,n2,β¦.

In the special case n = 2, we obtain delta discrete exponential distribution, i.e.

Pr[X=x]=(12)(32)βΟ(x),ββββββββββββx=0,1,β¦.
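The pmf (7.1) and its geometric special case can be checked with a short script. This is an illustrative sketch of our own (not the authors' code); we evaluate the falling factorial through `math.lgamma` to avoid overflow for large x.

```python
import math

def delta_gamma_pmf(x, a, b):
    """Pr[X = x] for the delta discrete gamma distribution (7.1),
    x^(underline(a-1)) b^a / (Gamma(a) (1+b)^(x+1)) on x = a-1, a, a+1, ...,
    computed in log-space: log x^(underline(a-1)) = lgamma(x+1) - lgamma(x+2-a)."""
    log_p = (math.lgamma(x + 1) - math.lgamma(x + 2 - a)
             + a * math.log(b) - math.lgamma(a) - (x + 1) * math.log(1 + b))
    return math.exp(log_p)

a, b = 2.5, 0.7
# The pmf sums to 1 over the support N_{a-1}, cf. (7.12).
total = sum(delta_gamma_pmf(a - 1 + j, a, b) for j in range(400))
assert abs(total - 1.0) < 1e-9
# For a = 1 it reduces to the geometric pmf (b/(1+b)) (1/(1+b))^x of case (a).
for x in range(6):
    geom = (b / (1 + b)) * (1.0 / (1 + b)) ** x
    assert abs(delta_gamma_pmf(x, 1.0, b) - geom) < 1e-12
```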

β  Statistical properties:

### Theorem 7.2.

If X ~ Ξβ(Ξ±, Ξ²), then the expectation, variance and delta moment generating function of the random variable X are given by

E[X]=Ξ±(1+Ξ²)Ξ²β1β1,
Var(X)=Ξ±(1+Ξ²)Ξ²β2,
MΟ(X)(t)=(11βt(1+Ξ²)Ξ²β1)Ξ±.

### Proof.

We have

$$E[X] = \sum_{x=\alpha-1}^{\infty} x\,\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}};$$

by use of the relation $(x-\alpha+1)\,x^{\underline{\alpha-1}} = x^{\underline{\alpha}}$, we have

$$E[X] = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} + (\alpha-1).$$

On the other hand, we have

$$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \sum_{x=\alpha}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{\sigma(x)}\frac{\Gamma(x+1)}{\Gamma(x+1-\alpha)} = \Big(\frac{1}{1+\beta}\Big)^{1+\alpha}\sum_{x=0}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{x}\frac{\Gamma(x+\alpha+1)}{\Gamma(x+1)}.$$

Now we apply Theorem 2.2.1 from [4] to the right-hand side and introduce the hypergeometric function ${}_2F_1$ in the following way:

$$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \Big(\frac{1}{1+\beta}\Big)^{1+\alpha}\Gamma(1+\alpha)\,{}_2F_1\Big(1, 1+\alpha; 1; \frac{1}{1+\beta}\Big);$$

by using Exercise 4.2.10 from [7] on the right-hand side, we have

$$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \Big(\frac{1}{1+\beta}\Big)^{1+\alpha}\frac{1}{\Gamma(-\alpha)}\int_0^1 x^{\alpha}(1-x)^{-\alpha-1}\Big(1-\frac{x}{1+\beta}\Big)^{-1}dx = \Big(\frac{1}{1+\beta}\Big)^{\alpha}\frac{1}{\Gamma(-\alpha)}\int_0^1\frac{(1-x)^{\alpha}x^{-\alpha-1}}{x+\beta}\,dx = \Big(\frac{1}{1+\beta}\Big)^{\alpha}\frac{1}{\Gamma(-\alpha)}B(1+\alpha, -\alpha)\,(1+\beta)^{\alpha}\beta^{-(\alpha+1)} = \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}},$$

where B(·, ·) is the ordinary beta function. Then we have

$$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}}, \tag{7.11}$$

which completes the proof of (7.8). For the proof of (7.9) we have

$$E[X^2] = \sum_{x=\alpha-1}^{\infty} x^2\,\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}} = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{\sigma(x)}x\big(x^{\underline{\alpha}} + (\alpha-1)\,x^{\underline{\alpha-1}}\big) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{\sigma(x)}x\,x^{\underline{\alpha}} + (\alpha-1)\big(\alpha(1+\beta^{-1})-1\big);$$

since $(1+x)\,x^{\underline{\alpha}} = (1+x)^{\underline{\alpha+1}}$, it follows that

$$\sum_{x=\alpha-1}^{\infty}\frac{x\,x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \sum_{x=\alpha-1}^{\infty}\frac{(1+x)^{\underline{1+\alpha}}}{(1+\beta)^{\sigma(x)}} - \sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}},$$

and by considering (7.11), we get

$$\sum_{x=\alpha-1}^{\infty}\frac{(1+x)^{\underline{1+\alpha}}}{(1+\beta)^{\sigma(x)}} = \frac{(1+\beta)\,\Gamma(\alpha+2)}{\beta^{\alpha+2}},$$

from which we have

$$E[X^2] = \alpha(2\alpha-1)\beta^{-1} + \alpha(\alpha+1)\beta^{-2} + (\alpha-1)^2.$$

Also, with

$$M_{\sigma(X)}(t) = \sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1-t}\Big)^{\sigma(x)}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}} = \sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,\big((1-t)(1+\beta)\big)^{\sigma(x)}} = \sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,\big(1+(\beta-t(1+\beta))\big)^{\sigma(x)}} = \Big(\frac{1}{1-t(1+\beta^{-1})}\Big)^{\alpha},$$

the proof is complete. □

Also, it is easily seen from (7.11) that

$$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}} = 1, \qquad \alpha > 0,\ \beta > 0. \tag{7.12}$$

β  Maximum Likelihood Estimation (MLE):

Let x1, x2, ..., xn be a random sample. If this sample are assumed to be independently and identically distributed (iid) random variables following Ξβ(Ξ±, Ξ²) distribution, then the likelihood function of the sample will be

L=Ξ²nΞ±Ξ xiΞ±β1_Ξn(Ξ±)(1+Ξ²)n+Ξ£xiI{Ξ±β1,Ξ±,β¦}(xi).

## 7.2 The nabla discrete gamma distribution

### Definition 7.3.

The random variable X is said to have a nabla discrete gamma distribution with parameters (α, β) if its pmf is given by

$$\Pr[X=x] = \frac{h_{\overline{\alpha-1}}(x)\,\beta^{\alpha}}{e_{\beta}^*(\rho(x), 0)} = \frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)}, \qquad x \in \mathbb{N}_{1}, \tag{7.13}$$

where α > 0 and 0 < β < 1; we denote this by $\Gamma_{\nabla}(\alpha, \beta)$.

β  Particular cases:

1. (a)

For Ξ± = 1, Ξβ(Ξ±, Ξ²) in (7.13) reduces to a one parameter nabla discrete gamma or nabla exponential distribution, Ξβ(1, Ξ²) β‘ Eβ(Ξ²) with pmf

Pr[X=x]=Ξ²(1βΞ²)Ο(x),βββββββββββx=1,2,β¦.
Obviously, this is the pmf of geometric distribution (the number of independent trials required for first success).

2. (b)

For Ξ± = n, n β β, Ξβ(Ξ±, Ξ²) in (7.13) is an nabla discrete Erlang distribution Ξβ(n, Ξ²) with pmf

Pr[X=x]=(x+nβ2xβ1)Ξ²n(1βΞ²)Ο(x),βββββββββββx=β1.
If we substitute Ο(x) = x, (7.14) and (7.15) are given by
Pr[X=x]=Ξ²(1βΞ²)x,ββββββββx=0,1,β¦,.
and
Pr[X=x]=(x+nβ1x)Ξ²n(1βΞ²)x,βββββββββββx=0,1,β¦,
respectively. It can be seen that (7.17) is the same negative binomial distribution (the number of failures for n successes) and (7.16) is the same geometric distribution (the number of failures for first success). Then, we call (7.15) the nabla negative binomial distribution and its special case (7.14) is the nabla geometric distribution. Therefore the nabla discrete exponential distribution is the same nabla geometric distribution.

3. (c)

For Ξ±=n2, n β β and Ξ²=12, Ξβ(Ξ±, Ξ²) in (7.13) is nabla discrete Chi-square distribution, Ο2β with pmf

Pr[X=x]=xn2β1Β―Ξ(n2)2n2+Ο(x),ββββββββββββx=1,2,β¦.
In the special case n = 2, we obtain nabla discrete exponential distribution, i.e.
Pr[X=x]=(12)x,βββββββββx=1,2,β¦.

β  Statistical properties:

### Theorem 7.4.

If X ~ Ξβ(Ξ±, Ξ²), then the expectation, variance and nabla moment generating function of the random variable of X are given by

E[X]=Ξ±(1βΞ²)Ξ²β1+1,
Var(X)=Ξ±(1βΞ²)Ξ²β2,
MΟ(X)(t)=(11βt(1βΞ²)Ξ²β1)Ξ±.

### Proof.

We have

$$E[X] = \sum_{x=1}^{\infty} x\,\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)};$$

since $(x+\alpha-1)\,x^{\overline{\alpha-1}} = x^{\overline{\alpha}}$, it results that

$$E[X] = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=1}^{\infty}x^{\overline{\alpha}}(1-\beta)^{\rho(x)} + (1-\alpha).$$

On the other hand, by the method used in the proof of Theorem 7.2, we have

$$\sum_{x=1}^{\infty}x^{\overline{\alpha}}(1-\beta)^{\rho(x)} = \sum_{x=0}^{\infty}(1-\beta)^{x}\frac{\Gamma(x+\alpha+1)}{\Gamma(x+1)} = \Gamma(\alpha+1)\,{}_2F_1(1, 1+\alpha; 1; 1-\beta) = \frac{1}{\Gamma(-\alpha)}\int_0^1\frac{x^{-\alpha-1}(1-x)^{\alpha}}{x+\beta(1-x)}\,dx = \frac{\Gamma(1+\alpha)}{\beta^{\alpha+1}}.$$

Here we applied the following identity from [4]:

$$\int_0^1\frac{x^{\alpha-1}(1-x)^{\beta-1}}{\big(ax+b(1-x)\big)^{\alpha+\beta}}\,dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{a^{\alpha}b^{\beta}\Gamma(\alpha+\beta)}.$$

Then we obtain

$$\sum_{x=1}^{\infty}x^{\overline{\alpha}}(1-\beta)^{\rho(x)} = \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}}, \tag{7.23}$$

and this completes the proof of (7.20). For the proof of (7.21), we have

$$E[X^2] = \sum_{x=1}^{\infty}x^2\,\frac{x^{\overline{\alpha-1}}\,(1-\beta)^{\rho(x)}\,\beta^{\alpha}}{\Gamma(\alpha)} = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=1}^{\infty}(1-\beta)^{\rho(x)}x\big(x^{\overline{\alpha}} + (1-\alpha)\,x^{\overline{\alpha-1}}\big) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=1}^{\infty}(1-\beta)^{\rho(x)}x\,x^{\overline{\alpha}} + (1-\alpha)\big(\alpha(1-\beta)\beta^{-1}+1\big);$$

since $x^{\overline{\alpha+1}} = (x+\alpha)\,x^{\overline{\alpha}}$, we have

$$\sum_{x=1}^{\infty}x\,x^{\overline{\alpha}}(1-\beta)^{\rho(x)} = \sum_{x=1}^{\infty}x^{\overline{\alpha+1}}(1-\beta)^{\rho(x)} - \alpha\sum_{x=1}^{\infty}x^{\overline{\alpha}}(1-\beta)^{\rho(x)};$$

now, by considering (7.23),

$$E[X^2] = \alpha(1+\alpha)\beta^{-2} - \alpha^2\beta^{-1} + (1-\alpha)\big(\alpha(1-\beta)\beta^{-1}+1\big).$$

Also, with

$$M_{\rho(X)}(t) = \sum_{x=1}^{\infty}(1+t)^{\rho(x)}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)} = \sum_{x=1}^{\infty}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,\big((1+t)(1-\beta)\big)^{\rho(x)}}{\Gamma(\alpha)} = \sum_{x=1}^{\infty}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,\big(1-(\beta-t(1-\beta))\big)^{\rho(x)}}{\Gamma(\alpha)} = \Big(\frac{1}{1-t(\beta^{-1}-1)}\Big)^{\alpha},$$

the proof is complete. □

Also, it is easily seen from (7.23) that

$$\sum_{x=1}^{\infty}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)} = 1, \qquad \alpha > 0,\ 0 < \beta < 1. \tag{7.24}$$

β  Maximum Likelihood Estimation (MLE):

Let x1, x2, ..., xn be a random sample.If this sample are assumed to be independently and identically distributed (iid) random variables following Ξβ(Ξ±, Ξ²) distribution, then the log likelihood function of the sample will be

logL=nΞ±logΞ²+(βxiβn)log(1βΞ²)βnlogΞ(Ξ±)+βlogxiΞ±β1Β―.

## 8 Unification of the continuous and discrete gamma distributions

For a given time scale π, we present the construction of pdf of gamma distribution, such that, the density function on time scales is

fX(x)=h^Ξ±β1(x)Ξ²Ξ±e^Ξ²(Ξ·(x),0),ββββββββxβπ.
In order that, the reader sees how the pdf of continuous gamma distribution and delta and nabla discrete gamma distributions follow from (8.1), it is only at this point necessary to know that
h^Ξ±β1(x)=hΞ±β1(x)=xΞ±β1Ξ(Ξ±),ββββββββββΞ·(x)=xββββββββandβββββββββe^Ξ²(Ξ·(X),0)=eΞ²xβββββββifβββββββββπ=β+,
h^Ξ±β1(x)=hΞ±β1_(x)=xΞ±β1_Ξ(Ξ±),ββββββββββΞ·(x)=Ο(x)ββββββββandβββββββββe^Ξ²(Ξ·(X),0)=(1+Ξ²)Ο(x)
if π = βΞ±β1, Ξ± > 0. If π = β1 then we have
h^Ξ±β1(x)=hΞ±β1Β―(x)=xΞ±β1Β―Ξ(Ξ±),ββββββββββΞ·(x)=Ο(x)ββββββββandβββββββββe^Ξ²(Ξ·(X),0)=(1βΞ²)βΟ(x).

## 9 Application

Here, the data give the times to death (in weeks) of AG-positive leukaemia patients (see [18] and [19]).

• {65, 156, 100, 134, 16, 108, 121, 4, 39, 143, 56, 26, 22, 1, 1, 5, 65}

Table 1: Results

| Model | MLE(s) | $\log\hat{L}$ | AIC | BIC |
|---|---|---|---|---|
| $\Gamma_{\nabla}(\alpha, \beta)$ | $\hat{\alpha} = 2$, $\hat{\beta} = 0.0330739$ | −93.8669 | 191.734 | 193.4 |
| Zipf(θ, n = 500) | $\hat{\theta} = 0.1484037$ | −101.148 | 206.296 | 207.962 |
| Zeta(γ) | $\hat{\gamma} = 1.439751$ | −100.276 | 202.552 | 203.385 |

The pmfs of the Zeta and Zipf distributions considered here for fitting are, respectively, given by

$$P(X=x) = \frac{x^{-\gamma}}{\sum_{i=1}^{\infty} i^{-\gamma}}, \qquad x = 1, 2, \dots,$$

and

$$P(X=x) = \frac{x^{-\theta}}{\sum_{i=1}^{n} i^{-\theta}}, \qquad x = 1, 2, \dots, n.$$

Let x₁, x₂, …, x_m be a random sample. If the sample is assumed to consist of iid random variables following the Zeta or Zipf distribution, then the log-likelihood function of the sample is

$$\log L = -\gamma\sum_{i=1}^{m}\log x_i - m\log\sum_{i=1}^{\infty}i^{-\gamma}$$

or $\log L = -\theta\sum_{i=1}^{m}\log x_i - m\log\sum_{i=1}^{n}i^{-\theta}$, respectively.

## Acknowledgement

We would like to thank the referees for a careful reading of our paper and many valuable suggestions on the first draft of the manuscript.

## References

[7] B.C. Carlson, Special Functions of Applied Mathematics, Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1977.
[9] S. Chakraborty, Generating discrete analogues of continuous probability distributions: a survey of methods and constructions, Journal of Statistical Distributions and Applications, Vol. 2, No. 6, 2015, pp. 1-30.
[10] M. Ganji and F. Gharari, The generalized random variable appears the trace of fractional calculus in statistics, Applied Mathematics & Information Sciences Letters, Vol. 3, No. 2, 2015, pp. 61-67.
[11] M. Holm, The Theory of Discrete Fractional Calculus: Development and Application [dissertation], University of Nebraska, Lincoln, NE, USA, 2011.
[19] D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway, and E.O. Ostrowski, A Handbook of Small Data Sets, Chapman and Hall, London, 1994.
