Journal of Statistical Theory and Applications

Volume 17, Issue 1, March 2018, Pages 39 - 58

A New Method for Generating Discrete Analogues of Continuous Distributions

Authors
M. Ganji (mganji@uma.ac.ir), F. Gharari (f.gharari@uma.ac.ir)
Department of Statistics, University of Mohaghegh Ardabili, Ardabil, Iran.
Received 9 May 2017, Accepted 12 September 2017, Available Online 31 March 2018.
DOI
10.2991/jsta.2018.17.1.4
Keywords
Discrete fractional calculus; Time scale; Gamma distribution
Abstract

In this paper we use discrete fractional calculus to show the existence of delta and nabla discrete distributions, and then apply time scales to define the delta and nabla discrete gamma distributions. The main result of this paper is a unification of the continuous and discrete gamma distributions, which is, at the same time, a distribution on a so-called time scale. Also, starting from the Laplace transform on time scales, we develop the concept of the moment generating function for these distributions.

Copyright
Copyright Β© 2018, the Authors. Published by Atlantis Press.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

1 Introduction

One of the active areas of research in statistics is the modeling of discrete lifetime data by developing discretized versions of suitable continuous lifetime distributions. The discretization of a continuous distribution using different methods has attracted renewed attention of researchers in the last few years; see, for example, [3, 8, 12, 13, 14, 15, 16, 17]. Recently, these methods have been classified in detail, based on different criteria of discretization, by Chakraborty [9].

In this article, we present a new method for the discretization of most continuous distributions whose probability density functions (pdfs) consist of a Taylor monomial and an exponential function, and as an example we discretize the gamma distribution with this method. Our discretization method has two main advantages over prior discretization methods. First, for a given continuous distribution, it is possible to generate two types (delta and nabla) of corresponding discrete distributions. Second, it provides a unification of the continuous distribution and the corresponding discrete distributions, which is, at the same time, a distribution on a time scale. We use discrete fractional calculus to show the existence of delta and nabla discrete distributions, and then apply time scales to define the delta and nabla discrete distributions and as a unifying theory under which continuous and discrete distributions are subsumed.

The article is organized as follows. The second section contains a summary of some notation and definitions in delta and nabla calculus, as well as the definitions of the delta and nabla Riemann right fractional sums and differences. In the third section, we use discrete fractional calculus to show the existence of delta and nabla discrete distributions. In the fourth and fifth sections, we define novel types of moments for delta and nabla discrete distributions and provide a method for obtaining these moments using the Laplace transform on the discrete time scale. The sixth section contains a unification of ordinary moments and delta and nabla moments, as well as the corresponding types of mgfs. In Section 7, the delta and nabla discrete gamma distributions are defined, while Section 8 contains a unification of the discrete and continuous gamma distributions. In the final section, an application of the proposed distribution is presented.

2 Preliminaries

In this section, we provide a collection of definitions and related results which are essential for the subsequent discussion. As mentioned in [5, 6], the definitions and theorems are as follows.

A time scale 𝕋 is an arbitrary nonempty closed subset of the real numbers ℝ. The most well-known examples are 𝕋 = ℝ and 𝕋 = β„€. The forward (backward) jump operator is defined by Οƒ(t) := inf{s ∈ 𝕋 : s > t} (ρ(t) := sup{s ∈ 𝕋 : s < t}), where infβˆ… := sup𝕋 and supβˆ… := inf𝕋. A point t ∈ 𝕋 is said to be right-dense if t < sup𝕋 and Οƒ(t) = t (left-dense if t > inf𝕋 and ρ(t) = t), right-scattered if Οƒ(t) > t (left-scattered if ρ(t) < t). The forward (backward) graininess function ΞΌ : 𝕋 β†’ [0, ∞) (Ξ½ : 𝕋 β†’ [0, ∞)) is defined by ΞΌ(t) := Οƒ(t) βˆ’ t (Ξ½(t) := t βˆ’ ρ(t)). More generally, we will denote all ρ(t), Οƒ(t) and t with Ξ·(t).

Definition 2.1.

A function f : 𝕋 β†’ ℝ is called regulated if its right-sided limits exist at all right-dense points in 𝕋 and its left-sided limits exist at all left-dense points in 𝕋.

Definition 2.2.

A function f : 𝕋 β†’ ℝ is called rd-continuous (ld-continuous) if it is continuous at right-dense (left-dense) points in 𝕋 and its left-sided (right-sided) limits exist at left-dense (right-dense) points in 𝕋.

The set 𝕋k (𝕋k*) is derived from the time scale 𝕋 as follows: If 𝕋 has a left-scattered maximum (right-scattered minimum) m, then 𝕋k := 𝕋 βˆ’ {m} (𝕋k*:=π•‹βˆ’{m}). Otherwise, 𝕋k := 𝕋 (𝕋k*:=𝕋).

Definition 2.3.

A function f : 𝕋 β†’ ℝ is said to be delta (nabla) differentiable at a point t ∈ 𝕋k (t ∈ 𝕋k*) if there exists a number f^Ξ”(t) (f^βˆ‡(t)) with the property that, given any Ο΅ > 0, there exists a neighborhood U of t such that

$|f(\sigma(t)) - f(s) - f^{\Delta}(t)(\sigma(t)-s)| \le \epsilon\,|\sigma(t)-s| \qquad \big(|f(\rho(t)) - f(s) - f^{\nabla}(t)(\rho(t)-s)| \le \epsilon\,|\rho(t)-s|\big)$
for all s ∈ U.

For a function f : 𝕋 β†’ ℝ it is possible to introduce a derivative f^Ξ”(t) (f^βˆ‡(t)) and an integral $\int_a^b f(t)\,\Delta t$ ($\int_a^b f(t)\,\nabla t$) in such a manner that f^Ξ”(t) = fβ€²(t) (f^βˆ‡(t) = fβ€²(t)) and $\int_a^b f(t)\,\Delta t = \int_a^b f(t)\,dt$ ($\int_a^b f(t)\,\nabla t = \int_a^b f(t)\,dt$) in the case 𝕋 = ℝ, and f^Ξ”(t) = Ξ”f(t) (f^βˆ‡(t) = βˆ‡f(t)) and $\int_a^b f(t)\,\Delta t = \sum_{t=a}^{b-1} f(t)$ ($\int_a^b f(t)\,\nabla t = \sum_{t=a+1}^{b} f(t)$) in the case 𝕋 = β„€, where the forward and backward difference operators are defined by Ξ”f(t) = f(t + 1) βˆ’ f(t) and βˆ‡f(t) = f(t) βˆ’ f(t βˆ’ 1), respectively. Also, we define the iterated operators Ξ”^n = Ξ”(Ξ”^{nβˆ’1}) and βˆ‡^n = βˆ‡(βˆ‡^{nβˆ’1}) for n ∈ β„•.

Definition 2.4.

A function p : 𝕋 β†’ ℝ is called μ–regressive (ν–regressive) provided 1 + ΞΌ(t)p(t) β‰  0 (1 βˆ’ Ξ½(t)p(t) β‰  0) for all t ∈ 𝕋k (t ∈ 𝕋k*).

The set ℝμ (ℝν) of all μ–regressive and rd-continuous (ν–regressive and ld-continuous) functions forms an Abelian group under the circle plus addition βŠ• defined by (p βŠ• q)(t) := p(t) + q(t) + ΞΌ(t)p(t)q(t) ((p βŠ• q)(t) := p(t) + q(t) βˆ’ Ξ½(t)p(t)q(t)) for all t ∈ 𝕋k (tβˆˆπ•‹k*). The additive inverse βŠ–p of p ∈ ℝμ (p ∈ ℝν) is defined by

$(\ominus p)(t) := \frac{-p(t)}{1+\mu(t)p(t)} \qquad \Big((\ominus p)(t) := \frac{-p(t)}{1-\nu(t)p(t)}\Big)$
for all t ∈ 𝕋k (tβˆˆπ•‹k*).

For real numbers a and b we denote β„•a = {a, a + 1, ...} and bβ„• = {b, b βˆ’ 1, ...}.

Theorem 2.5.

Let p ∈ ℝμ (p ∈ ℝν) and t0 ∈ 𝕋 be a fixed point. Then the delta (nabla) exponential function $e_p(\cdot, t_0)$ ($e_p^*(\cdot, t_0)$) is the unique solution of the initial value problem

$y^{\Delta} = p(t)\,y, \quad y(t_0)=1 \qquad \big(y^{\nabla} = p(t)\,y, \quad y(t_0)=1\big).$

If 𝕋 = β„•a, when p(t) ≑ p, where p ∈ ℝμ (p ∈ ℝν = β„‚ \ {1}) and t0 = a, it is easy to see that $e_p(t,a) = (1+p)^{t-a}$ ($e_p^*(t,a) = (1-p)^{a-t}$), and if 𝕋 = ℝ, $e_p(t,a) = e^{p(t-a)}$ ($e_p^*(t,a) = e^{p(t-a)}$), where e is the ordinary exponential function. Moreover, in the special case, $e_1(t,0) = 2^t$ ($e_{1/2}^*(t,0) = 2^t$). More generally, we will denote all of $e_p(t,a)$, $e_p^*(t,a)$ and $e^{p(t-a)}$ by $\hat{e}_p(t,a)$.
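These closed forms can be checked directly: on 𝕋 = β„•a the delta equation y^Ξ” = py reads y(t+1) βˆ’ y(t) = p y(t), and the nabla equation reads y(t) βˆ’ y(tβˆ’1) = p y(t). The following minimal Python sketch (our own illustration, not part of the paper) iterates both recursions and compares them with $(1+p)^{t-a}$ and $(1-p)^{a-t}$, including the special cases $e_1(t,0)=2^t$ and $e_{1/2}^*(t,0)=2^t$ quoted above.

```python
# A minimal sketch (our own illustration): on T = N_a the delta IVP
# y^Delta = p*y, y(a) = 1 means y(t+1) - y(t) = p*y(t), and the nabla IVP
# y^Nabla = p*y means y(t) - y(t-1) = p*y(t).  Iterating both should match
# the closed forms e_p(t,a) = (1+p)**(t-a) and e*_p(t,a) = (1-p)**(a-t).

def delta_exponential(p, a, t):
    """Iterate y(s+1) = (1+p)*y(s) from y(a) = 1 up to s = t."""
    y = 1.0
    for _ in range(t - a):
        y *= (1.0 + p)
    return y

def nabla_exponential(p, a, t):
    """Iterate y(s) = y(s-1)/(1-p) from y(a) = 1 up to s = t."""
    y = 1.0
    for _ in range(t - a):
        y /= (1.0 - p)
    return y

if __name__ == "__main__":
    p, a, t = 0.3, 0, 7
    assert abs(delta_exponential(p, a, t) - (1 + p) ** (t - a)) < 1e-12
    assert abs(nabla_exponential(p, a, t) - (1 - p) ** (a - t)) < 1e-12
    # special cases quoted in the text: e_1(t,0) = 2**t and e*_{1/2}(t,0) = 2**t
    assert delta_exponential(1.0, 0, 5) == 2 ** 5
    assert abs(nabla_exponential(0.5, 0, 5) - 2 ** 5) < 1e-9
```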

Definition 2.6.

The delta (nabla) Taylor monomials are the functions hn : 𝕋 Γ— 𝕋 β†’ ℝ, n ∈ β„•0, and are defined recursively as follows:

$h_0(t,s)=1, \qquad h_{n+1}(t,s)=\int_s^t h_n(\tau,s)\,\Delta\tau, \qquad \forall\, t,s \in \mathbb{T}$
$\Big(h_0^*(t,s)=1, \qquad h_{n+1}^*(t,s)=\int_s^t h_n^*(\tau,s)\,\nabla\tau, \qquad \forall\, t,s \in \mathbb{T}\Big).$

We consider three cases for the time scale 𝕋.

  1. (a)

    If 𝕋 = ℝ, then Οƒ(t) = ρ(t) = t and the Taylor monomials can be written explicitly as

    $h_n(t,s) = h_n^*(t,s) = \frac{(t-s)^n}{n!}, \qquad t,s \in \mathbb{R},\ n \in \mathbb{N}_0.$

    For each Ξ± ∈ ℝ \ {βˆ’β„•} define the Ξ±βˆ’th Taylor monomial to be

    $h_\alpha(t,s) = \frac{(t-s)^{\alpha}}{\Gamma(\alpha+1)},$
    where Ξ“ denotes the gamma function.

    In this paper, we only consider the special case $h_\alpha(t) := h_\alpha(t,0) = \frac{t^{\alpha}}{\Gamma(\alpha+1)}$ as the Taylor monomial (tm).

  2. (b)

    If 𝕋 = β„€, then Οƒ(t) = t + 1 and the Taylor monomials can be written explicitly as

    $h_n(t,s) = \frac{(t-s)^{\underline{n}}}{n!}, \qquad t,s \in \mathbb{Z},\ n \in \mathbb{N}_0,$
    where $t^{\underline{n}} = \prod_{j=0}^{n-1}(t-j) = \frac{\Gamma(t+1)}{\Gamma(t+1-n)}$ and the product is zero when t + 1 βˆ’ j = 0 for some j. More generally, for arbitrary Ξ± we define $t^{\underline{\alpha}} = \frac{\Gamma(t+1)}{\Gamma(t+1-\alpha)}$, with the convention that division at a pole yields zero. This generalized falling function allows us to extend (2.5) to define a general Taylor monomial that will serve us well in the probability distributions setting.

    For each Ξ± ∈ ℝ \ {βˆ’β„•}, define the delta α–th Taylor monomial to be

    $h_\alpha(t,s) = \frac{(t-s)^{\underline{\alpha}}}{\Gamma(\alpha+1)}.$

    In this paper, we only consider the special case $h_{\underline{\alpha}}(t) := h_\alpha(t,0) = \frac{t^{\underline{\alpha}}}{\Gamma(\alpha+1)}$ as the delta Taylor monomial (dtm).

  3. (c)

    If 𝕋 = β„€, then ρ(t) = t βˆ’ 1 and the Taylor monomials can be written explicitly as

    $h_n^*(t,s) = \frac{(t-s)^{\overline{n}}}{n!}, \qquad t,s \in \mathbb{Z},\ n \in \mathbb{N}_0,$
    where $t^{\overline{n}} = \prod_{j=0}^{n-1}(t+j) = \frac{\Gamma(t+n)}{\Gamma(t)}$. More generally, for any real number Ξ± the rising function is defined as $t^{\overline{\alpha}} = \frac{\Gamma(t+\alpha)}{\Gamma(t)}$, where t ∈ ℝ \ {βˆ’β„•0} and $0^{\overline{\alpha}} = 0$. This function allows us to extend (2.7) in order to define a general Taylor monomial that will serve us well in the probability distributions setting.

    For each Ξ± ∈ ℝ \ {βˆ’β„•} define the nabla α–th Taylor monomial to be

    $h_\alpha^*(t,s) = \frac{(t-s)^{\overline{\alpha}}}{\Gamma(\alpha+1)}.$

    In this paper, we only consider the special case $h_{\overline{\alpha}}(t) := h_\alpha^*(t,0) = \frac{t^{\overline{\alpha}}}{\Gamma(\alpha+1)}$ as the nabla Taylor monomial (ntm).

    More generally, we will denote all of $h_{\underline{\alpha}}(t)$, $h_{\overline{\alpha}}(t)$ and $h_\alpha(t)$ by $\hat{h}_\alpha(t)$; a small numerical sketch of these monomials is given below.
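The following short Python sketch (an illustration under our own conventions, not code from the paper) evaluates the falling and rising functions through the gamma function, with the pole convention stated above, and collects the three special-case monomials behind the unified symbol $\hat{h}_\alpha(t)$.

```python
import math

def falling(t, alpha):
    """Generalized falling function t^{underline{alpha}} = Gamma(t+1)/Gamma(t+1-alpha);
    by the convention stated above, the value is 0 when the denominator has a pole."""
    try:
        return math.gamma(t + 1) / math.gamma(t + 1 - alpha)
    except ValueError:                      # Gamma pole at a non-positive integer
        return 0.0

def rising(t, alpha):
    """Rising function t^{overline{alpha}} = Gamma(t+alpha)/Gamma(t), with 0^{overline{alpha}} = 0."""
    try:
        return math.gamma(t + alpha) / math.gamma(t)
    except ValueError:
        return 0.0

def h_hat(t, alpha, kind):
    """Unified Taylor monomial h_hat_alpha(t): tm, dtm or ntm, each divided by Gamma(alpha+1)."""
    numerator = {"tm": t ** alpha, "dtm": falling(t, alpha), "ntm": rising(t, alpha)}[kind]
    return numerator / math.gamma(alpha + 1)

if __name__ == "__main__":
    print(falling(5, 3), 5 * 4 * 3)         # 60.0  60
    print(rising(5, 3), 5 * 6 * 7)          # 210.0 210
    print(h_hat(2.5, 1.5, "dtm"))           # a non-integer-order example
```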

Definition 2.7.

The delta (nabla) Laplace transform of a regulated function f : 𝕋a β†’ ℝ is given by

$\mathcal{L}_a\{f\}(s) = \int_a^{\infty} e_{\ominus s}(\sigma(t),a)\,f(t)\,\Delta t \qquad \Big(\mathcal{L}_a^*\{f\}(s) = \int_a^{\infty} e_{\ominus s}^*(\rho(t),a)\,f(t)\,\nabla t\Big),$
for all s ∈ π’Ÿ{f}, where a ∈ ℝ is fixed, 𝕋a is an unbounded time scale with infimum a, and π’Ÿ{f} is the set of all regressive complex constants for which the integral converges. In the special case 𝕋 = β„•a, every function is regulated and its delta (nabla) discrete Laplace transform can be written as
$\mathcal{L}_a\{f\}(s) = \sum_{t=a}^{\infty}\Big(\frac{1}{1+s}\Big)^{\sigma(t)} f(t) \qquad \Big(\mathcal{L}_a^*\{f\}(s) = \sum_{t=a}^{\infty}(1-s)^{\rho(t)} f(t)\Big).$
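As an illustration of these discrete transforms (a sketch under our own truncation choices, not the authors' code), the next snippet approximates both series on 𝕋 = β„•0 and checks the delta transform of f ≑ 1 against the geometric-series value $\sum_{t\ge 0}(1/(1+s))^{t+1} = 1/s$, and the nabla transform of f ≑ 1 against $\sum_{t\ge 0}(1-s)^{t-1} = 1/(s(1-s))$, for 0 < s < 1.

```python
# A small numerical sketch (assumptions of our own, not from the paper): truncated
# delta and nabla discrete Laplace transforms on T = N_0.

def delta_laplace(f, s, a=0, terms=10_000):
    """Truncated delta discrete Laplace transform: sum of (1/(1+s))**sigma(t) * f(t)."""
    return sum((1.0 / (1.0 + s)) ** (t + 1) * f(t) for t in range(a, a + terms))

def nabla_laplace(f, s, a=0, terms=10_000):
    """Truncated nabla discrete Laplace transform: sum of (1-s)**rho(t) * f(t)."""
    return sum((1.0 - s) ** (t - 1) * f(t) for t in range(a, a + terms))

if __name__ == "__main__":
    s = 0.25
    print(delta_laplace(lambda t: 1.0, s), 1.0 / s)                  # both ~ 4.0
    # the nabla transform of f = 1 also sums a geometric series:
    print(nabla_laplace(lambda t: 1.0, s), 1.0 / (s * (1.0 - s)))    # both ~ 5.333
```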

Let b be a real number and f : bβ„• β†’ ℝ. The delta Riemann right fractional sum of order Ξ± > 0 is defined by Abdeljawad [1] as

$\Delta^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\sum_{s=t+\alpha}^{b}(\rho(s)-t)^{\underline{\alpha-1}}\, f(s), \qquad t \in {}_{b-\alpha}\mathbb{N}.$
We define the nabla Riemann right fractional sum of order Ξ± > 0 as
$\nabla^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\sum_{s=t}^{b-1}(\sigma(s)-t)^{\overline{\alpha-1}}\, f(s), \qquad t \in {}_{b-1}\mathbb{N}.$
The delta Riemann right fractional difference of order Ξ± > 0 is defined by Abdeljawad [1] as
$\Delta^{\alpha} f(t) = (-1)^n\,\nabla^n\,\Delta^{-(n-\alpha)} f(t),$
for $t \in {}_{b-(n-\alpha)}\mathbb{N}$ and n = [Ξ±] + 1, where [Ξ±] is the greatest integer less than Ξ±. Also, the nabla Riemann right fractional difference of order Ξ± > 0 is defined by
$\nabla^{\alpha} f(t) = (-1)^n\,\Delta^n\,\nabla^{-(n-\alpha)} f(t),$
for $t \in {}_{b-n}\mathbb{N}$.

In [2], the author obtained the following alternative formula for the delta Riemann right fractional difference:

$\Delta^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)}\sum_{s=t-\alpha}^{b}(\rho(s)-t)^{\underline{-\alpha-1}}\, f(s).$
Similarly, we can prove the following formula for the nabla Riemann right fractional difference:
$\nabla^{\alpha} f(t) = \frac{1}{\Gamma(-\alpha)}\sum_{s=t}^{b-1}(\sigma(s)-t)^{\overline{-\alpha-1}}\, f(s).$
For an introduction to discrete fractional calculus the reader is referred to [11].

3 Generating discrete distributions by discrete fractional calculus

The following results show the relationship between continuous and discrete fractional calculus and statistics, and also allow us to define different types of discrete distributions. Suppose that X is a positive continuous random variable. The expectation of the tm function, $h_{\alpha-1}(X)$, coincides with the Riemann–Liouville right fractional integral of the pdf at the origin for Ξ± > 0 and with the Marchaud right fractional derivative of the pdf at the origin for βˆ’1 < Ξ± < 0; that is, we have

$E[h_{\alpha-1}(X)] = \begin{cases} (I_{-}^{\alpha}f)(0), & \alpha > 0, \\ (D_{-}^{\alpha}f)(0), & -1 < \alpha < 0, \end{cases}$
where
$(I_{-}^{\alpha}f)(t) = \frac{1}{\Gamma(\alpha)}\int_0^{\infty} x^{\alpha-1} f(x+t)\,dx$
is the Riemann–Liouville right fractional integral, while
$(D_{-}^{\alpha}f)(t) = \frac{1}{\Gamma(-\alpha)}\int_0^{\infty} x^{-\alpha-1}\{f(x+t)-f(t)\}\,dx$
is the Marchaud right fractional derivative [10].

Note that the limits of the above integrals coincide with the support of the random variable X. With this point in mind, we present the following theorems for a discrete random variable X.

Theorem 3.1.

Suppose that X is a discrete random variable. The expectation of the dtm function, $h_{\underline{\alpha-1}}(X)$, coincides with the delta Riemann right fractional sum of the probability mass function (pmf) at βˆ’1 for Ξ± > 0 and with the delta Riemann right fractional difference of the pmf at βˆ’1 for Ξ± < 0, Ξ± βˆ‰ βˆ’β„•, i.e.

$E[h_{\underline{\alpha-1}}(X)] = \begin{cases} (\Delta^{-\alpha}f)(-1), & \alpha > 0, \\ (\Delta^{\alpha}f)(-1), & \alpha < 0,\ \alpha \notin \{\ldots,-2,-1\}, \end{cases}$
where
$(\Delta^{-\alpha}f)(t) = \sum_{x=\alpha-1}^{b-1-t} \frac{x^{\underline{\alpha-1}}}{\Gamma(\alpha)}\, f(x+t+1)$
is the delta Riemann right fractional sum, while
$(\Delta^{\alpha}f)(t) = \sum_{x=-\alpha-1}^{b-t-1} \frac{x^{\underline{-\alpha-1}}}{\Gamma(-\alpha)}\, f(x+t+1)$
is the delta Riemann right fractional difference.

Proof.

For Ξ± > 0, substitute x = ρ(s) βˆ’ t in the expression (2.10) and also for Ξ± < 0 and Ξ± βˆ‰ {βˆ’1, βˆ’2, ...}, in the expression (2.12).

Here, considering the limits of summation, we can define discrete distributions with support β„•Ξ±βˆ’1 or a finite subset of it. In this case, we call X a delta discrete random variable. As an example, we will define the delta discrete gamma distribution. Another example is the delta discrete uniform distribution, DU{Ξ± βˆ’ 1, Ξ±, ..., Ξ± + Ξ²}, where Ξ± ∈ ℝ and Ξ² ∈ β„•βˆ’1.

Theorem 3.2.

Suppose that X is a discrete random variable. The expectation of the ntm function, $h_{\overline{\alpha-1}}(X)$, coincides with the nabla Riemann right fractional sum of the pmf at 1 for Ξ± > 0 and with the nabla Riemann right fractional difference of the pmf at 1 for Ξ± < 0, Ξ± βˆ‰ βˆ’β„•, i.e.

$E[h_{\overline{\alpha-1}}(X)] = \begin{cases} (\nabla^{-\alpha}f)(1), & \alpha > 0, \\ (\nabla^{\alpha}f)(1), & \alpha < 0,\ \alpha \notin \{\ldots,-2,-1\}, \end{cases}$
where
$(\nabla^{-\alpha}f)(t) = \sum_{x=1}^{b+1-t} \frac{x^{\overline{\alpha-1}}}{\Gamma(\alpha)}\, f(x+t-1)$
is the nabla Riemann right fractional sum, while
$(\nabla^{\alpha}f)(t) = \sum_{x=1}^{b+1-t} \frac{x^{\overline{-\alpha-1}}}{\Gamma(-\alpha)}\, f(x+t-1)$
is the nabla Riemann right fractional difference.

Proof.

For Ξ± > 0, substitute x = Οƒ(s) βˆ’ t in the expression (2.11) and also for Ξ± < 0 and Ξ± βˆ‰ {βˆ’1, βˆ’2, ...}, in the expression (2.13).

Therefore, considering the limits of summation in the preceding theorem, we can define discrete distributions with support β„•1 or a finite subset of it. In this case, we call X a nabla discrete random variable. In this work, we will define the nabla discrete gamma distribution. Another example is the nabla discrete uniform distribution, DU{1, 2, ..., Ξ± βˆ’ Ξ² + 1}, where Ξ± ∈ ℝ and Ξ² ∈ Ξ±β„•.

4 Nabla moments and nabla moment generating function

In this section, we define novel types of moments for delta and nabla discrete distributions and provide a method for obtaining these moments using the Laplace transform on the discrete time scale.

It is well known that the Laplace transform of the pdf is the moment generating function (mgf), defined as $M_X(-t) = E[e^{-tX}] = \int_0^{\infty} e^{-tx} f(x)\,dx$, where X is a non-negative real-valued random variable and t is a complex variable with non-negative real part. On the other hand, it can easily be seen that $M_X(-t) = \sum_{k=0}^{\infty} E[(-X)^k]\frac{t^k}{k!}$. This function generates the moments of integer order of the random variable X as $\mu_k = E[X^k] = (-1)^k \frac{d^k M_X(-t)}{dt^k}\big|_{t=0}$.

Now, suppose that X is a delta discrete random variable with values in β„•Ξ±βˆ’1, Ξ± > 0. The delta discrete Laplace transform of the pmf of X is defined as

$\sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1+t}\Big)^{\sigma(x)} f(x) = E\big[(1+t)^{-\sigma(X)}\big] = E\big[e_{\ominus t}(\sigma(X),0)\big] = M_{\sigma(X)}(-t),$
where t ranges over the set of all regressive complex constants for which the series converges. By using the series expansion of $(1+t)^{-x}$, it can easily be proved that $M_{\sigma(X)}(-t) = \sum_{k=0}^{\infty} E\big[(-1)^k(\sigma(X))^{\overline{k}}\big]\frac{t^k}{k!}$. This function generates the nabla moments of integer order of X as $E\big[(\sigma(X))^{\overline{k}}\big] = (-1)^k \frac{d^k M_{\sigma(X)}(-t)}{dt^k}\big|_{t=0}$.

Definition 4.1.

Let X be a delta discrete random variable with pmf f.

  1. (a)

    Its k–th nabla moment is denoted by $\mu_k^{\nabla}$ and is defined by $\mu_k^{\nabla} := \sum_x (\sigma(x))^{\overline{k}} f(x)$.

  2. (b)

    The nabla mgf of X is given by $M_{\sigma(X)}(t) = E\big[e_t^*(\sigma(X),0)\big]$.

Theorem 4.2.

Let X be a delta discrete random variable with nabla moments $\mu_k^{\nabla}$. We have

$M_{\sigma(X)}(t) = \sum_{k=0}^{\infty} E\big[(\sigma(X))^{\overline{k}}\big]\frac{t^k}{k!}.$
In particular,
$E\big[(\sigma(X))^{\overline{k}}\big] = \frac{d^k M_{\sigma(X)}(t)}{dt^k}\bigg|_{t=0}.$

Proof.

For the proof of (4.1), we use the series expansion of the function $(1-t)^{-x}$, that is, $\sum_{k=0}^{\infty} x^{\overline{k}}\frac{t^k}{k!}$. We have

$M_{\sigma(X)}(t) = \sum_x \sum_{k=0}^{\infty} (\sigma(x))^{\overline{k}}\frac{t^k}{k!}\, f(x) = \sum_{k=0}^{\infty}\Big(\sum_x (\sigma(x))^{\overline{k}} f(x)\Big)\frac{t^k}{k!}.$
For the proof of (4.2), differentiate $M_{\sigma(X)}(t)$ a total of k times. Since the only t–dependence in the summation is the $(1-t)^{-\sigma(x)}$ factor, we have
$\frac{d^k}{dt^k} M_{\sigma(X)}(t) = \sum_x \Big[\frac{d^k}{dt^k}(1-t)^{-\sigma(x)}\Big] f(x) = \sum_x (\sigma(x))^{\overline{k}}\,(1-t)^{-(\sigma(x)+k)} f(x),$
the claim now follows from taking t = 0 and recalling the definition of the nabla moments.β–‘
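Relation (4.1) can be checked numerically. The sketch below (our own illustration; the truncation limits are arbitrary choices of this sketch) uses the delta exponential pmf $f(x) = \beta/(1+\beta)^{x+1}$ on {0, 1, 2, ...}, which appears later as (7.2), computes $M_{\sigma(X)}(t) = E[(1-t)^{-\sigma(X)}]$ directly, and compares it with the partial sum $\sum_k \mu_k^{\nabla}\, t^k/k!$ built from the nabla moments.

```python
import math

def rising(x, k):
    """Rising factorial x^{overline{k}} for integer k >= 0."""
    out = 1.0
    for j in range(k):
        out *= x + j
    return out

# delta exponential pmf (7.2) on {0, 1, 2, ...}; sigma(x) = x + 1 on this time scale
beta = 1.5
f = lambda x: beta / (1.0 + beta) ** (x + 1)

N = 1_000      # truncation of the support (an arbitrary choice for this sketch)
t = 0.2        # small enough that (1 - t)(1 + beta) > 1, so the series converges

# left-hand side of (4.1): M_{sigma(X)}(t) = E[(1 - t)^{-sigma(X)}]
lhs = sum((1.0 - t) ** (-(x + 1)) * f(x) for x in range(N))

# right-hand side of (4.1): sum over k of E[(sigma(X))^{overline{k}}] * t^k / k!
def nabla_moment(k):
    return sum(rising(x + 1, k) * f(x) for x in range(N))

rhs = sum(nabla_moment(k) * t ** k / math.factorial(k) for k in range(40))

print(lhs, rhs)    # the two values should agree to several decimal places
```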

5 Delta moments and delta moment generating function

Suppose that X is a nabla discrete random variable with values in β„•1. The nabla discrete Laplace transform of the pmf of X is defined as

$\sum_{x=1}^{\infty}(1-t)^{\rho(x)} f(x) = E\big[(1-t)^{\rho(X)}\big] = E\big[e_{\ominus t}^*(\rho(X),0)\big] = M_{\rho(X)}(-t),$
where t ranges over the set of all regressive complex constants for which the series converges. By using the series expansion of $(1-t)^{x}$, it can easily be proved that $M_{\rho(X)}(-t) = \sum_{k=0}^{\infty} E\big[(-1)^k(\rho(X))^{\underline{k}}\big]\frac{t^k}{k!}$. This function generates the delta moments of integer order of X as $E\big[(\rho(X))^{\underline{k}}\big] = (-1)^k \frac{d^k M_{\rho(X)}(-t)}{dt^k}\big|_{t=0}$.

Definition 5.1.

Let X be a nabla discrete random variable with pmf f.

  1. (a)

    Its k–th delta moment is denoted by $\mu_k^{\Delta}$ and is defined by $\mu_k^{\Delta} := \sum_x (\rho(x))^{\underline{k}} f(x)$.

  2. (b)

    The delta mgf of X is given by Mρ(X)(t) = E[et(ρ(X), 0)].

Theorem 5.2.

Let X be a nabla discrete random variable with delta moments $\mu_k^{\Delta}$. We have

$M_{\rho(X)}(t) = \sum_{k=0}^{\infty} E\big[(\rho(X))^{\underline{k}}\big]\frac{t^k}{k!}.$
In particular,
$E\big[(\rho(X))^{\underline{k}}\big] = \frac{d^k M_{\rho(X)}(t)}{dt^k}\bigg|_{t=0}.$

Proof.

The proof is similar to that of Theorem 4.2. We only point out that $(1+t)^{x} = \sum_{k=0}^{\infty} x^{\underline{k}}\frac{t^k}{k!}$.

More generally, we denote all of $\mu_k^{\Delta}$, $\mu_k^{\nabla}$ and $\mu_k$ by $\hat{\mu}_k$, and all of $M_{\sigma(X)}(t)$, $M_{\rho(X)}(t)$ and $M_X(t)$ by $M_{\eta(X)}(t)$; these are useful notations in statistics.

6 Unification of the mgf and moments

For a given time scale 𝕋, we present the construction of moments and the mgfs on time scales as

$\hat{\mu}_k = E\big[\Gamma(k+1)\,\hat{h}_k(\eta(X))\big] \qquad \text{and} \qquad M_{\eta(X)}(-t) = E\big[\hat{e}_{\ominus t}(\eta(X),0)\big], \qquad x \in \mathbb{T},$
respectively. In order that the reader may see how the ordinary moments, the delta and nabla moments, and the corresponding types of mgfs follow from (6.1), it is only necessary at this point to know that
$\hat{h}_k(x) = h_k(x) = \frac{x^k}{\Gamma(k+1)}, \qquad \eta(x) = \sigma(x) = \rho(x) = x, \qquad \hat{e}_{\ominus t}(\eta(X),0) = e^{-tx},$
if 𝕋 = ℝ+,
$\hat{h}_k(x) = h_{\overline{k}}(x) = \frac{x^{\overline{k}}}{\Gamma(k+1)}, \qquad \eta(x) = \sigma(x), \qquad \hat{e}_{\ominus t}(\eta(X),0) = (1+t)^{-\sigma(x)},$
if 𝕋 = β„•Ξ±βˆ’1, Ξ± > 0, and
$\hat{h}_k(x) = h_{\underline{k}}(x) = \frac{x^{\underline{k}}}{\Gamma(k+1)}, \qquad \eta(x) = \rho(x), \qquad \hat{e}_{\ominus t}(\eta(X),0) = (1-t)^{\rho(x)},$
if 𝕋 = β„•1.
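Formula (6.1) can be read as a single routine that dispatches on the time scale. The sketch below (our own illustration; the function names and the crude Riemann-sum quadrature for 𝕋 = ℝ+ are assumptions of this sketch) computes $\hat{\mu}_k$ as the ordinary moment $E[X^k]$, the nabla moment $E[(\sigma(X))^{\overline{k}}]$ or the delta moment $E[(\rho(X))^{\underline{k}}]$, in line with Definitions 4.1 and 5.1.

```python
import math

def falling(x, k):
    """Falling factorial x^{underline{k}} for integer k >= 0."""
    out = 1.0
    for j in range(k):
        out *= x - j
    return out

def rising(x, k):
    """Rising factorial x^{overline{k}} for integer k >= 0."""
    out = 1.0
    for j in range(k):
        out *= x + j
    return out

def unified_moment(k, support, weight, scale):
    """mu_hat_k = E[Gamma(k+1) * h_hat_k(eta(X))] as in (6.1).

    scale = "R"     : ordinary moment  E[X^k]                       (weight = pdf on a grid)
    scale = "delta" : nabla moment     E[(sigma(X))^{rising k}],    sigma(x) = x + 1
    scale = "nabla" : delta moment     E[(rho(X))^{falling k}],     rho(x)   = x - 1
    """
    if scale == "R":
        dx = support[1] - support[0]          # crude Riemann-sum quadrature (sketch only)
        return sum(x ** k * weight(x) * dx for x in support)
    if scale == "delta":
        return sum(rising(x + 1, k) * weight(x) for x in support)
    if scale == "nabla":
        return sum(falling(x - 1, k) * weight(x) for x in support)
    raise ValueError(scale)

if __name__ == "__main__":
    lam = 2.0
    grid = [0.0005 + 0.001 * i for i in range(40_000)]     # grid on (0, 40)
    # exponential density on R+: second moment should be close to 2/lam**2 = 0.5
    print(unified_moment(2, grid, lambda x: lam * math.exp(-lam * x), "R"))
```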

7 The delta and nabla discrete gamma distributions

In this section, we introduce the delta and nabla discrete gamma distributions by replacing the continuous Taylor monomial and exponential function in the continuous gamma distribution with their corresponding discrete versions (on the discrete time scale).

7.1 The delta discrete gamma distribution

Definition 7.1. It is said that the random variable X has a delta discrete gamma distribution with parameters (Ξ±, Ξ²) if its pmf is given by

$\Pr[X=x] = \frac{h_{\underline{\alpha-1}}(x)\,\beta^{\alpha}}{e_{\beta}(\sigma(x),0)} = \frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}}, \qquad x \in \mathbb{N}_{\alpha-1},$
where Ξ± > 0 and Ξ² > 0; this distribution is denoted by Ξ“βˆ†(Ξ±, Ξ²).
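As a quick sanity check of Definition 7.1, the following sketch (our own code, with an arbitrary truncation of the support) evaluates the pmf through log-gamma functions, confirms that it sums to approximately one, and verifies that Ξ± = 1 reproduces the geometric pmf given in (7.2) below.

```python
import math

def delta_gamma_pmf(x, alpha, beta):
    """pmf of Gamma_Delta(alpha, beta) at x in {alpha-1, alpha, alpha+1, ...}:
    x^{underline{alpha-1}} * beta**alpha / (Gamma(alpha) * (1+beta)**(x+1)),
    evaluated via log-gamma for numerical stability (a sketch, not the authors' code)."""
    log_falling = math.lgamma(x + 1) - math.lgamma(x + 2 - alpha)   # log of x^{underline{alpha-1}}
    log_p = (log_falling + alpha * math.log(beta)
             - math.lgamma(alpha) - (x + 1) * math.log(1.0 + beta))
    return math.exp(log_p)

if __name__ == "__main__":
    alpha, beta = 2.0, 0.4
    support = [alpha - 1 + k for k in range(5_000)]      # truncated support (assumption)
    print(sum(delta_gamma_pmf(x, alpha, beta) for x in support))    # ~ 1.0

    # alpha = 1 reduces to the geometric pmf (7.2): beta/(1+beta) * (1/(1+beta))**x
    for x in range(5):
        geom = (beta / (1 + beta)) * (1 / (1 + beta)) ** x
        assert abs(delta_gamma_pmf(x, 1.0, beta) - geom) < 1e-12
```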

β–  Particular cases:

  1. (a)

    For Ξ± = 1, Ξ“βˆ†(Ξ±, Ξ²) in (7.1) reduces to a one parameter delta discrete gamma or delta exponential distribution, Ξ“βˆ†(1, Ξ²) ≑ Eβˆ†(Ξ²) with pmf

    $\Pr[X=x] = \beta(1+\beta)^{-\sigma(x)} = \Big(\frac{\beta}{1+\beta}\Big)\Big(\frac{1}{1+\beta}\Big)^{x}, \qquad x = 0, 1, \ldots.$

    Obviously, this is the pmf of the geometric distribution (the number of failures before the first success).

  2. (b)

    For Ξ± = n, n ∈ β„•, Ξ“βˆ†(Ξ±, Ξ²) in (7.1) is a delta discrete Erlang distribution Ξ“βˆ†(n, Ξ²) with pmf

    $\Pr[X=x] = \binom{x}{x-n+1}\beta^{n}(1+\beta)^{-\sigma(x)}, \qquad x \in \mathbb{N}_{n-1}.$

    If we substitute Οƒ(x) = x, (7.2) and (7.3) are given by

    $\Pr[X=x] = \Big(\frac{\beta}{1+\beta}\Big)\Big(\frac{1}{1+\beta}\Big)^{x-1}, \qquad x = 1, 2, \ldots,$
    and
    $\Pr[X=x] = \binom{x-1}{x-n}\Big(\frac{\beta}{1+\beta}\Big)^{n}\Big(\frac{1}{1+\beta}\Big)^{x-n}, \qquad x = n, n+1, \ldots,$
    respectively. It can be seen that (7.5) is the usual negative binomial distribution (the number of independent trials required for n successes) and (7.4) is the usual geometric distribution (the number of independent trials required for the first success). Therefore, we call (7.3) the delta negative binomial distribution and its special case (7.2) the delta geometric distribution. Hence, the delta discrete exponential distribution is the same as the delta geometric distribution.

  3. (c)

    For $\alpha = \frac{n}{2}$, n ∈ β„•, and $\beta = \frac{1}{2}$, Ξ“βˆ†(Ξ±, Ξ²) in (7.1) is the delta discrete chi-square distribution, $\chi_{\Delta}^{2}$, with pmf

    $\Pr[X=x] = \frac{x^{\underline{n/2-1}}}{\Gamma(\frac{n}{2})\,2^{n/2}}\Big(\frac{2}{3}\Big)^{\sigma(x)}, \qquad x = \tfrac{n}{2}-1, \tfrac{n}{2}, \ldots.$

    In the special case n = 2, we obtain the delta discrete exponential distribution, i.e.

    $\Pr[X=x] = \Big(\frac{1}{2}\Big)\Big(\frac{3}{2}\Big)^{-\sigma(x)}, \qquad x = 0, 1, \ldots.$

β–  Statistical properties:

Theorem 7.2.

If X ∼ Ξ“βˆ†(Ξ±, Ξ²), then the expectation, variance and nabla moment generating function of the random variable X are given by

$E[X] = \alpha(1+\beta)\beta^{-1} - 1,$
$\mathrm{Var}(X) = \alpha(1+\beta)\beta^{-2},$
$M_{\sigma(X)}(t) = \Big(\frac{1}{1-t(1+\beta)\beta^{-1}}\Big)^{\alpha}.$

Proof.

We have

$E[X] = \sum_{x=\alpha-1}^{\infty} x\,\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}};$
by use of the relation $(x-\alpha+1)\,x^{\underline{\alpha-1}} = x^{\underline{\alpha}}$, we have
$E[X] = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} + (\alpha-1).$
On the other hand, we have
$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \sum_{x=\alpha}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{\sigma(x)}\frac{\Gamma(x+1)}{\Gamma(x+1-\alpha)} = \Big(\frac{1}{1+\beta}\Big)^{1+\alpha}\sum_{x=0}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{x}\frac{\Gamma(x+\alpha+1)}{\Gamma(x+1)}.$
Now, we apply Theorem 2.2.1 from [4] to the right-hand side and introduce the hypergeometric function ${}_2F_1$ in the following way:
$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \Big(\frac{1}{1+\beta}\Big)^{1+\alpha}\Gamma(1+\alpha)\,{}_2F_1\Big(1, 1+\alpha; 1; \frac{1}{1+\beta}\Big);$
by applying Exercise 4.2.10 from [7] to the right-hand side, we have
$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \Big(\frac{1}{1+\beta}\Big)^{1+\alpha}\frac{1}{\Gamma(-\alpha)}\int_0^1 x^{\alpha}(1-x)^{-\alpha-1}\Big(1-\frac{x}{1+\beta}\Big)^{-1}dx = \Big(\frac{1}{1+\beta}\Big)^{\alpha}\frac{1}{\Gamma(-\alpha)}\int_0^1\frac{(1-x)^{\alpha}\,x^{-\alpha-1}}{x+\beta}\,dx = \Big(\frac{1}{1+\beta}\Big)^{\alpha}\frac{1}{\Gamma(-\alpha)}\,B(1+\alpha,-\alpha)\,(1+\beta)^{\alpha}\beta^{-(\alpha+1)} = \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}},$
where B(Β·, Β·) is the ordinary beta function. Then we have
$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}},$
which completes the proof of (7.8). For the proof of (7.9), we have
$E[X^2] = \sum_{x=\alpha-1}^{\infty} x^2\,\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}} = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{\sigma(x)} x\big(x^{\underline{\alpha}} + (\alpha-1)x^{\underline{\alpha-1}}\big) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1+\beta}\Big)^{\sigma(x)} x\,x^{\underline{\alpha}} + (\alpha-1)\big(\alpha(1+\beta^{-1})-1\big),$
since $(1+x)\,x^{\underline{\alpha}} = (1+x)^{\underline{\alpha+1}}$, which implies that
$\sum_{x=\alpha-1}^{\infty}\frac{x\,x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}} = \sum_{x=\alpha-1}^{\infty}\frac{(1+x)^{\underline{1+\alpha}}}{(1+\beta)^{\sigma(x)}} - \sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha}}}{(1+\beta)^{\sigma(x)}},$
and by considering (7.11), we get
$\sum_{x=\alpha-1}^{\infty}\frac{(1+x)^{\underline{1+\alpha}}}{(1+\beta)^{\sigma(x)}} = \frac{(1+\beta)\,\Gamma(\alpha+2)}{\beta^{\alpha+2}},$
from which we have
$E[X^2] = \alpha(2\alpha-1)\beta^{-1} + \alpha(\alpha+1)\beta^{-2} + (\alpha-1)^2.$
Also, with
$M_{\sigma(X)}(t) = \sum_{x=\alpha-1}^{\infty}\Big(\frac{1}{1-t}\Big)^{\sigma(x)}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}} = \sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,\big((1-t)(1+\beta)\big)^{\sigma(x)}} = \sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,\big(1+(\beta-t(1+\beta))\big)^{\sigma(x)}} = \Big(\frac{1}{1-t(1+\beta^{-1})}\Big)^{\alpha},$
the proof is complete.

Also, it is easily seen from (7.11) that

$\sum_{x=\alpha-1}^{\infty}\frac{x^{\underline{\alpha-1}}\,\beta^{\alpha}}{\Gamma(\alpha)\,(1+\beta)^{\sigma(x)}} = 1, \qquad \alpha > 0,\ \beta > 0.$
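The moments in Theorem 7.2 and the normalization identity above are easy to confirm numerically; the sketch below (an illustration with an arbitrary truncation of the support, not the authors' code) compares the truncated sums with Ξ±(1+Ξ²)Ξ²^βˆ’1 βˆ’ 1 and Ξ±(1+Ξ²)Ξ²^βˆ’2.

```python
import math

def delta_gamma_pmf(x, alpha, beta):
    # pmf (7.1): x^{underline{alpha-1}} * beta**alpha / (Gamma(alpha) * (1+beta)**(x+1))
    log_p = (math.lgamma(x + 1) - math.lgamma(x + 2 - alpha)
             + alpha * math.log(beta) - math.lgamma(alpha)
             - (x + 1) * math.log(1.0 + beta))
    return math.exp(log_p)

alpha, beta = 2.7, 0.6
xs = [alpha - 1 + k for k in range(20_000)]          # truncated support N_{alpha-1} (assumption)
p  = [delta_gamma_pmf(x, alpha, beta) for x in xs]

total = sum(p)                                       # should be ~ 1, cf. the identity above
mean  = sum(x * q for x, q in zip(xs, p))
var   = sum((x - mean) ** 2 * q for x, q in zip(xs, p))

print(total)                                         # ~ 1.0
print(mean, alpha * (1 + beta) / beta - 1)           # both ~ 6.2
print(var,  alpha * (1 + beta) / beta ** 2)          # both ~ 12.0
```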

β–  Maximum Likelihood Estimation (MLE):

Let x1, x2, ..., xn be a random sample. If these observations are assumed to be independent and identically distributed (iid) random variables following the Ξ“βˆ†(Ξ±, Ξ²) distribution, then the likelihood function of the sample is

$L = \frac{\beta^{n\alpha}\,\prod_{i=1}^{n} x_i^{\underline{\alpha-1}}}{\Gamma^{n}(\alpha)\,(1+\beta)^{\,n+\sum_i x_i}}\prod_{i=1}^{n} I_{\{\alpha-1,\alpha,\ldots\}}(x_i).$
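For a fixed Ξ±, differentiating $\log L = n\alpha\log\beta + \sum_i \log x_i^{\underline{\alpha-1}} - n\log\Gamma(\alpha) - (n+\sum_i x_i)\log(1+\beta)$ with respect to Ξ² gives the closed form $\hat{\beta} = \alpha/(\bar{x}+1-\alpha)$. Since the support β„•Ξ±βˆ’1 must contain the integer observations, Ξ± βˆ’ 1 can only be a nonnegative integer not exceeding min xα΅’, so one may profile over these candidate values of Ξ±. The sketch below illustrates this on simulated data; the function names and the profiling strategy are our own assumptions about how the fit might be implemented, not the procedure used for Table 1.

```python
import math, random

def log_likelihood(data, alpha, beta):
    """log of the likelihood L above, dropping the indicator (the support is checked
    separately when the candidate values of alpha are chosen)."""
    n = len(data)
    log_falling = sum(math.lgamma(x + 1) - math.lgamma(x + 2 - alpha) for x in data)
    return (n * alpha * math.log(beta) + log_falling
            - n * math.lgamma(alpha) - (n + sum(data)) * math.log(1.0 + beta))

def fit_delta_gamma(data):
    """Profile-MLE sketch: for each admissible integer alpha (alpha - 1 <= min x_i, so
    that N_{alpha-1} contains the data), use beta_hat = alpha/(mean(x) + 1 - alpha)
    and keep the pair with the largest log-likelihood."""
    n, xbar = len(data), sum(data) / len(data)
    best = None
    for alpha in range(1, int(min(data)) + 2):
        beta = alpha / (xbar + 1 - alpha)
        if beta <= 0:
            continue
        ll = log_likelihood(data, alpha, beta)
        if best is None or ll > best[0]:
            best = (ll, alpha, beta)
    return best

if __name__ == "__main__":
    # simulate from Gamma_Delta(alpha, beta): X = alpha - 1 + (negative binomial count)
    random.seed(0)
    alpha_true, beta_true = 3, 0.25
    p = beta_true / (1.0 + beta_true)
    def draw():
        failures, successes = 0, 0
        while successes < alpha_true:
            if random.random() < p:
                successes += 1
            else:
                failures += 1
        return alpha_true - 1 + failures
    sample = [draw() for _ in range(2000)]
    print(fit_delta_gamma(sample))      # alpha_hat should be near 3, beta_hat near 0.25
```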

7.2 The nabla discrete gamma distribution

Definition 7.3.

It is said that the random variable X has a nabla discrete gamma distribution with (Ξ±, Ξ²) parameters if its pmf is given by

$\Pr[X=x] = \frac{h_{\overline{\alpha-1}}(x)\,\beta^{\alpha}}{e_{\beta}^{*}(\rho(x),0)} = \frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)}, \qquad x \in \mathbb{N}_{1},$
where Ξ± > 0 and 0 < Ξ² < 1; this distribution is denoted by Ξ“βˆ‡(Ξ±, Ξ²).

β–  Particular cases:

  1. (a)

    For Ξ± = 1, Ξ“βˆ‡(Ξ±, Ξ²) in (7.13) reduces to a one parameter nabla discrete gamma or nabla exponential distribution, Ξ“βˆ‡(1, Ξ²) ≑ Eβˆ‡(Ξ²) with pmf

    $\Pr[X=x] = \beta(1-\beta)^{\rho(x)}, \qquad x = 1, 2, \ldots.$
    Obviously, this is the pmf of the geometric distribution (the number of independent trials required for the first success).

  2. (b)

    For Ξ± = n, n ∈ β„•, Ξ“βˆ‡(Ξ±, Ξ²) in (7.13) is a nabla discrete Erlang distribution Ξ“βˆ‡(n, Ξ²) with pmf

    $\Pr[X=x] = \binom{x+n-2}{x-1}\beta^{n}(1-\beta)^{\rho(x)}, \qquad x \in \mathbb{N}_{1}.$
    If we substitute ρ(x) = x, (7.14) and (7.15) are given by
    $\Pr[X=x] = \beta(1-\beta)^{x}, \qquad x = 0, 1, \ldots,$
    and
    $\Pr[X=x] = \binom{x+n-1}{x}\beta^{n}(1-\beta)^{x}, \qquad x = 0, 1, \ldots,$
    respectively. It can be seen that (7.17) is the usual negative binomial distribution (the number of failures before n successes) and (7.16) is the usual geometric distribution (the number of failures before the first success). Thus, we call (7.15) the nabla negative binomial distribution and its special case (7.14) the nabla geometric distribution. Therefore, the nabla discrete exponential distribution is the same as the nabla geometric distribution.

  3. (c)

    For $\alpha = \frac{n}{2}$, n ∈ β„•, and $\beta = \frac{1}{2}$, Ξ“βˆ‡(Ξ±, Ξ²) in (7.13) is the nabla discrete chi-square distribution, $\chi_{\nabla}^{2}$, with pmf

    $\Pr[X=x] = \frac{x^{\overline{n/2-1}}}{\Gamma(\frac{n}{2})\,2^{n/2+\rho(x)}}, \qquad x = 1, 2, \ldots.$
    In the special case n = 2, we obtain the nabla discrete exponential distribution, i.e.
    $\Pr[X=x] = \Big(\frac{1}{2}\Big)^{x}, \qquad x = 1, 2, \ldots.$

β–  Statistical properties:

Theorem 7.4.

If X ∼ Ξ“βˆ‡(Ξ±, Ξ²), then the expectation, variance and delta moment generating function of the random variable X are given by

$E[X] = \alpha(1-\beta)\beta^{-1} + 1,$
$\mathrm{Var}(X) = \alpha(1-\beta)\beta^{-2},$
$M_{\rho(X)}(t) = \Big(\frac{1}{1-t(1-\beta)\beta^{-1}}\Big)^{\alpha}.$

Proof.

We have

$E[X] = \sum_{x=1}^{\infty} x\,\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)};$
since $(x+\alpha-1)\,x^{\overline{\alpha-1}} = x^{\overline{\alpha}}$, it follows that
$E[X] = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=1}^{\infty} x^{\overline{\alpha}}\,(1-\beta)^{\rho(x)} + (1-\alpha).$
On the other hand, by a method similar to the proof of Theorem 7.2, we have
$\sum_{x=1}^{\infty} x^{\overline{\alpha}}\,(1-\beta)^{\rho(x)} = \sum_{x=0}^{\infty}(1-\beta)^{x}\,\frac{\Gamma(x+\alpha+1)}{\Gamma(x+1)} = \Gamma(\alpha+1)\,{}_2F_1(1, 1+\alpha; 1; 1-\beta) = \frac{1}{\Gamma(-\alpha)}\int_0^1\frac{x^{-\alpha-1}(1-x)^{\alpha}}{x+\beta(1-x)}\,dx = \frac{\Gamma(1+\alpha)}{\beta^{\alpha+1}}.$
Here, we applied the following identity from [4]:
$\int_0^1\frac{x^{\alpha-1}(1-x)^{\beta-1}}{\big(ax+b(1-x)\big)^{\alpha+\beta}}\,dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{a^{\alpha}b^{\beta}\,\Gamma(\alpha+\beta)}.$
Then we obtain
$\sum_{x=1}^{\infty} x^{\overline{\alpha}}\,(1-\beta)^{\rho(x)} = \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}},$
and this completes the proof of (7.20). For the proof of (7.21), we have
$E[X^2] = \sum_{x=1}^{\infty} x^2\,\frac{x^{\overline{\alpha-1}}\,(1-\beta)^{\rho(x)}\,\beta^{\alpha}}{\Gamma(\alpha)} = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=1}^{\infty}(1-\beta)^{\rho(x)}\, x\big(x^{\overline{\alpha}} + (1-\alpha)x^{\overline{\alpha-1}}\big) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\sum_{x=1}^{\infty}(1-\beta)^{\rho(x)}\, x\,x^{\overline{\alpha}} + (1-\alpha)\big(\alpha(1-\beta)\beta^{-1}+1\big);$
since $x^{\overline{\alpha+1}} = (x+\alpha)\,x^{\overline{\alpha}}$, we have
$\sum_{x=1}^{\infty} x\,x^{\overline{\alpha}}\,(1-\beta)^{\rho(x)} = \sum_{x=1}^{\infty} x^{\overline{\alpha+1}}\,(1-\beta)^{\rho(x)} - \alpha\sum_{x=1}^{\infty} x^{\overline{\alpha}}\,(1-\beta)^{\rho(x)},$
and by considering (7.23),
$E[X^2] = \alpha(1+\alpha)\beta^{-2} - \alpha^2\beta^{-1} + (1-\alpha)\big(\alpha(1-\beta)\beta^{-1}+1\big).$
Also, with
$M_{\rho(X)}(t) = \sum_{x=1}^{\infty}(1+t)^{\rho(x)}\,\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)} = \sum_{x=1}^{\infty}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,\big((1+t)(1-\beta)\big)^{\rho(x)}}{\Gamma(\alpha)} = \sum_{x=1}^{\infty}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,\big(1-(\beta-t(1-\beta))\big)^{\rho(x)}}{\Gamma(\alpha)} = \Big(\frac{1}{1-t(\beta^{-1}-1)}\Big)^{\alpha},$
the proof is complete.

Also, it is easily seen from (7.23) that

$\sum_{x=1}^{\infty}\frac{x^{\overline{\alpha-1}}\,\beta^{\alpha}\,(1-\beta)^{\rho(x)}}{\Gamma(\alpha)} = 1, \qquad \alpha > 0,\ 0 < \beta < 1.$
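As with the delta case, the normalization above and the moments in Theorem 7.4 can be confirmed numerically; the following sketch (our own illustration with an arbitrary truncation of the support) compares the truncated sums with Ξ±(1βˆ’Ξ²)Ξ²^βˆ’1 + 1 and Ξ±(1βˆ’Ξ²)Ξ²^βˆ’2.

```python
import math

def nabla_gamma_pmf(x, alpha, beta):
    # pmf (7.13): x^{overline{alpha-1}} * beta**alpha * (1-beta)**(x-1) / Gamma(alpha), x = 1, 2, ...
    log_rising = math.lgamma(x + alpha - 1) - math.lgamma(x)
    log_p = (log_rising + alpha * math.log(beta)
             + (x - 1) * math.log(1.0 - beta) - math.lgamma(alpha))
    return math.exp(log_p)

alpha, beta = 1.8, 0.3
xs = range(1, 20_000)                                  # truncated support N_1 (assumption)
p  = [nabla_gamma_pmf(x, alpha, beta) for x in xs]

mean = sum(x * q for x, q in zip(xs, p))
var  = sum((x - mean) ** 2 * q for x, q in zip(xs, p))

print(sum(p))                                          # ~ 1.0
print(mean, alpha * (1 - beta) / beta + 1)             # both ~ 5.2
print(var,  alpha * (1 - beta) / beta ** 2)            # both ~ 14.0
```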

β–  Maximum Likelihood Estimation (MLE):

Let x1, x2, ..., xn be a random sample. If these observations are assumed to be independent and identically distributed (iid) random variables following the Ξ“βˆ‡(Ξ±, Ξ²) distribution, then the log-likelihood function of the sample is

$\log L = n\alpha\log\beta + \Big(\sum_i x_i - n\Big)\log(1-\beta) - n\log\Gamma(\alpha) + \sum_i \log x_i^{\overline{\alpha-1}}.$

8 Unification of the continuous and discrete gamma distributions

For a given time scale 𝕋, we present the construction of the pdf of the gamma distribution; the density function on time scales is

$f_X(x) = \frac{\hat{h}_{\alpha-1}(x)\,\beta^{\alpha}}{\hat{e}_{\beta}(\eta(x),0)}, \qquad x \in \mathbb{T}.$
In order that the reader may see how the pdf of the continuous gamma distribution and the delta and nabla discrete gamma distributions follow from (8.1), it is only necessary at this point to know that
$\hat{h}_{\alpha-1}(x) = h_{\alpha-1}(x) = \frac{x^{\alpha-1}}{\Gamma(\alpha)}, \qquad \eta(x) = x, \qquad \hat{e}_{\beta}(\eta(X),0) = e^{\beta x}, \qquad \text{if } \mathbb{T} = \mathbb{R}^{+},$
$\hat{h}_{\alpha-1}(x) = h_{\underline{\alpha-1}}(x) = \frac{x^{\underline{\alpha-1}}}{\Gamma(\alpha)}, \qquad \eta(x) = \sigma(x), \qquad \hat{e}_{\beta}(\eta(X),0) = (1+\beta)^{\sigma(x)},$
if 𝕋 = β„•Ξ±βˆ’1, Ξ± > 0. If 𝕋 = β„•1, then we have
$\hat{h}_{\alpha-1}(x) = h_{\overline{\alpha-1}}(x) = \frac{x^{\overline{\alpha-1}}}{\Gamma(\alpha)}, \qquad \eta(x) = \rho(x), \qquad \hat{e}_{\beta}(\eta(X),0) = (1-\beta)^{-\rho(x)}.$

9 Application

Here, the data give the time to death (in weeks) of AG-positive leukemia patients (see [18] and [19]).

  • {65, 156, 100, 134, 16, 108, 121, 4, 39, 143, 56, 26, 22, 1, 1, 5, 65}

Model               MLE(s)                     log L       AIC       BIC
Ξ“βˆ†(Ξ±, Ξ²)            Ξ±Μ‚ = 2, Ξ²Μ‚ = 0.0330739      βˆ’93.8669    191.734   193.4
Zipf(ΞΈ, n = 500)    ΞΈΜ‚ = 0.1484037             βˆ’101.148    206.296   207.962
Zeta(Ξ³)             Ξ³Μ‚ = 1.439751              βˆ’100.276    202.552   203.385

Table 1: Results.

The pmfs of Zeta and Zipf considered here for fitting are, respectively, given by

$P(X=x) = \frac{1}{x^{\gamma}\sum_{i=1}^{\infty}(1/i)^{\gamma}}, \qquad x = 1, 2, \ldots$
and
$P(X=x) = \frac{1}{x^{\theta}\sum_{i=1}^{n}(1/i)^{\theta}}, \qquad x = 1, 2, \ldots, n.$
Let x1, x2, ..., xm be a random sample. If these observations are assumed to be independent and identically distributed (iid) random variables following the Zeta or Zipf distribution, then the log-likelihood function of the sample is
$\log L = -\gamma\sum_{i=1}^{m}\log x_i - m\log\sum_{i=1}^{\infty}\Big(\frac{1}{i}\Big)^{\gamma}$
or $\log L = -\theta\sum_{i=1}^{m}\log x_i - m\log\sum_{i=1}^{n}\Big(\frac{1}{i}\Big)^{\theta}$, respectively.
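Table 1 compares the fitted models through the maximized log-likelihood, AIC and BIC. The sketch below shows how such a comparison can be assembled, assuming the usual definitions AIC = 2k βˆ’ 2 log L and BIC = k log m βˆ’ 2 log L, with k the number of estimated parameters and m the sample size; the infinite Zeta normalizing sum is evaluated with scipy.special.zeta. This is an illustration of the bookkeeping only and is not claimed to reproduce the exact entries of Table 1.

```python
import math
from scipy.special import zeta          # Riemann zeta function for the infinite sum

# time to death (in weeks) of the AG-positive leukemia patients listed above
data = [65, 156, 100, 134, 16, 108, 121, 4, 39, 143, 56, 26, 22, 1, 1, 5, 65]
m = len(data)

def zeta_loglik(gamma):
    """log L = -gamma * sum(log x_i) - m * log( sum_{i>=1} i**(-gamma) )."""
    return -gamma * sum(math.log(x) for x in data) - m * math.log(zeta(gamma))

def zipf_loglik(theta, n=500):
    """Same form with the normalizing sum truncated at n (the Zipf pmf above)."""
    norm = sum(i ** (-theta) for i in range(1, n + 1))
    return -theta * sum(math.log(x) for x in data) - m * math.log(norm)

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k):
    return k * math.log(m) - 2 * loglik

if __name__ == "__main__":
    # illustrative check: evaluate the criteria at the Zeta estimate reported in Table 1
    ll = zeta_loglik(1.439751)
    print(ll, aic(ll, 1), bic(ll, 1))
```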

Acknowledgement

We would like to thank the referees for a careful reading of our paper and for many valuable suggestions on the first draft of the manuscript.

Appendix

Figure 1:

Probability mass functions (Ξ“βˆ†) for various values of Ξ± and Ξ² = 0.5

Figure 2:

Probability mass functions (Ξ“βˆ†) for various values of Ξ² and Ξ± = 0.5

Figure 3:

Probability mass functions (Ξ“βˆ†) for various values of Ξ² and Ξ± = 2

Figure 4:

Probability mass functions (Ξ“βˆ†) for various values of Ξ± and Ξ² = 2

References

[7] B.C. Carlson, Special Functions of Applied Mathematics, Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1977.
[9] S. Chakraborty, Generating discrete analogues of continuous probability distributions: a survey of methods and constructions, Journal of Statistical Distributions and Applications, Vol. 2, No. 6, 2015, pp. 1-30.
[10] M. Ganji and F. Gharari, The generalized random variable appears the trace of fractional calculus in statistics, Applied Mathematics & Information Sciences Letters, Vol. 3, No. 2, 2015, pp. 61-67.
[11] M. Holm, The Theory of Discrete Fractional Calculus: Development and Application [dissertation], University of Nebraska, Lincoln, NE, USA, 2011.
[19] D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway and E.O. Ostrowski, A Handbook of Small Data Sets, Chapman and Hall, London, 1994.