# Divergence Measures Estimation and Its Asymptotic Normality Theory Using Wavelets Empirical Processes III

Amadou Diadié Bâ^{1}, Gane Samb Lo^{2, *}, Diam Bâ^{3}

^{1}Unité de Formation et de Recherche des Sciences Appliquées à la Technologie, Laboratoire d'Etudes et de Recherches en Statistiques et Développement, Gaston Berger University, Saint Louis, Sénégal

^{2}LERSTAD, Gaston Berger University, Saint-Louis, Senegal, Evanston Drive, NW, Calgary, Canada, T3P 0J9, Associate Researcher, LSTA, Pierre and Marie Curie University, Paris, France, Associate Professor, African University of Sciences and Technology, Abuja, Nigeria

^{3}Unité de Formation et de Recherche des Sciences Appliquées à la Technologie, Laboratoire d'Etudes et de Recherches en Statistiques et Développement, Gaston Berger University, Saint Louis, Sénégal

^{*}Corresponding author. Email: gane-samb.lo@ugb.edu.sn

- DOI
- 10.2991/jsta.d.190514.002
- Keywords
- Divergence measures estimation
- Abstract
In the first two papers of this series, the main results on the asymptotic behavior of empirical divergence measures based on wavelets theory were established and particularized for important families of divergence measures, such as the Rényi and Tsallis families and the Kullback-Leibler measure. While the proofs of the results in the second paper may be skipped, those of the first paper must be given in full, since they serve as a foundation for the whole structure of results. We prove them in this last paper of the series. We also address the applicability of the results to usual distribution functions.

- Copyright
- © 2019 The Authors. Published by Atlantis Press SARL.
- Open Access
- This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

## 1. INTRODUCTION AND RECALL OF THE RESULTS TO BE PROVED

For a general introduction, we refer the reader to the first ten pages of [1], in which the notation and the assumptions are presented.

Let us recall here the main results we presented earlier. The first is related to the empirical process based on wavelets.
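Before turning to the proofs, a minimal numerical sketch may help fix ideas about the wavelet-based estimators behind these empirical processes. The illustration below is ours, not the construction of [1]: the Haar basis, the resolution level j, and all names are assumptions chosen for simplicity. It implements the classical wavelet projection density estimator f̂(x) = Σ_k α̂_{j,k} φ_{j,k}(x), with empirical coordinates α̂_{j,k} = n^{-1} Σ_i φ_{j,k}(X_i).

```python
import numpy as np

def haar_phi(x):
    """Haar scaling function: indicator of [0, 1)."""
    return ((x >= 0.0) & (x < 1.0)).astype(float)

def wavelet_density_estimate(sample, j, xs):
    """Haar projection estimator f_hat(x) = sum_k a_jk * phi_jk(x),
    with a_jk = mean of phi_jk(X_i) and phi_jk(x) = 2^{j/2} phi(2^j x - k)."""
    scale = 2.0 ** j
    fhat = np.zeros_like(xs, dtype=float)
    # Only resolution-j cells that actually contain data points contribute.
    ks = np.unique(np.floor(scale * sample).astype(int))
    for k in ks:
        a_jk = np.sqrt(scale) * np.mean(haar_phi(scale * sample - k))
        fhat += a_jk * np.sqrt(scale) * haar_phi(scale * xs - k)
    return fhat

rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=5000)   # true density: 1 on [0, 1]
xs = np.linspace(0.01, 0.99, 99)
fhat = wavelet_density_estimate(sample, j=4, xs=xs)
print(float(np.round(fhat.mean(), 2)))      # should be close to 1
```

For a uniform sample the estimate hovers around the true density 1, with cell-level fluctuations of order (2^j / n)^{1/2}.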

### Theorem 1.1.

*Given the* *defined in Condition* (8) *such that* *and let* *defined as in Formula* (13) *and* *as in Formula* (17). *Then, under* **Assumptions** 1–3, *all in* [1], *and for any bounded function* *defined on* *and belonging to* *we have*

### Proof.

Suppose that **Assumptions** 1 and 3, in [1], are satisfied and

We have

It follows that

To complete the proof, we have to show that (1)

For the first point, we show that

Let us denote

Such conditions are the Lindeberg-Feller-Lévy conditions (see [4], Point B, p. 292).
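As a concrete illustration of these conditions (ours, not from the paper; the uniform triangular array below is an assumption chosen for simplicity), the Lindeberg quantity L_n(ε) = Σ_i E[X_{n,i}² 1{|X_{n,i}| > ε}] vanishes as n grows while the summed variances stay normalized to one, and the normalized sums become approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(42)

def lindeberg_array(n, reps=2000):
    """Triangular array X_{n,i} = Y_i / sqrt(n), with Y_i iid uniform on
    [-sqrt(3), sqrt(3)] (mean 0, variance 1), so sum_i Var X_{n,i} = 1."""
    return rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, reps)) / np.sqrt(n)

def lindeberg_term(x, eps=0.1):
    """Monte Carlo estimate of L_n(eps) = sum_i E[X_{n,i}^2 1{|X_{n,i}| > eps}]."""
    return float(np.mean(np.sum(x**2 * (np.abs(x) > eps), axis=0)))

x_small, x_large = lindeberg_array(10), lindeberg_array(1000)
s_large = x_large.sum(axis=0)                 # normalized row sums S_n
print(lindeberg_term(x_small), lindeberg_term(x_large))
print(float(s_large.mean()), float(s_large.var()))
```

For n = 1000 the array variables are bounded by √3/√1000 < 0.1, so L_n(0.1) is exactly zero, and the empirical mean and variance of S_n are close to 0 and 1.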

We have to check that

To prove this, let us first observe that

By Assumption 3 in [1], we have for any

Recall that

For all

We get that for all

Then for any

We have

Hence,

We have from (2)

Thus,

Besides, the

We also have

To prove

We get

By (6), we have

And then

Next

We have

This proves

Now that Conditions

But we have

Using (7), we get

Finally, from (8), we obtain

This ends the first point.

As to the second point, we apply Theorem 9.3 in [3] to have

Therefore, we have

The two other main results are related to the asymptotics of the class of the

### Theorem 1.2.

*Under* **Assumptions** 1–3, *Conditions C-* , *C-* , *C1-* , *C2-* *and* (BD), *all in* [1], *we have*

### Proof.

In the proofs, we will systematically use the mean value theorem. In the multivariate setting, we prefer to use the Taylor-Lagrange-Cauchy formula as stated in [5], page 230. The assumptions have already been set up to meet the requirements of these two tools. To keep the notation simple, we introduce the following two notations:
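For the reader's convenience, here are standard statements of these two tools (our phrasing; the precise multivariate form used in the proofs is the one given in [5], page 230):

```latex
% Mean value theorem: for f differentiable on an open interval containing
% [a, b], there exists c strictly between a and b such that
f(b) - f(a) = f'(c)\,(b - a).

% First-order Taylor-Lagrange(-Cauchy) formula: for g of class C^1 on an
% open convex set containing the segment [x, x + h], there exists
% \theta \in (0, 1) such that
g(x + h) - g(x) = \nabla g(x + \theta h) \cdot h.
```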

Recall that

We start by showing that (10) holds.

We have

So by applying the mean value theorem to the function

Now we have

Therefore,

Under the **Boundedness Assumption** (6) in [1], we know that **condition** (19), [1], is satisfied, that is

This proves (10).

Formula (11) is obtained in a similar way. We only need to adapt the result concerning the first coordinate to the second.

The proof of (12) proceeds by splitting

We already know how to handle

By the Taylor-Lagrange-Cauchy (see [5], page 230), we have

From there, combining these remarks leads to the result.

The second main result concerns the asymptotic normality of the

### Theorem 1.3.

*Under* **Assumptions** 1–3, *Conditions C-* , *C-* , *C1-* , *C2-* *and* (BD), *all in* [1], *we have*

### Proof.

We start by proving (15). By going back to (14), we have

Now, by Theorem 1.1, one knows that

Let us show that

From Theorem 3 in [2], we have

Finally

This ends the proof of (15).

The result (16) is obtained by a symmetry argument, swapping the roles of

Now, it remains to prove Formula (17) of the theorem. Let us use the bivariate Taylor-Lagrange-Cauchy formula to get

We have

Thus we get

But we have

Using this independence, we have

Therefore, we have

Hence,

That leads to

It remains to prove that

As previously, we have

From there, the conclusion is immediate.

We finish the series with this section on the applicability of our results to usual *pdf*'s.

## 2. APPLICABILITY OF THE RESULTS FOR USUAL PROBABILITY LAWS

Here, we address the applicability of our results to usual distribution functions. We have seen that we need to avoid infinite and null values. For example, with the integrals in the Rényi and Tsallis families, we may encounter such problems, as signaled in the first pages of paper [1]. To avoid them, we already suggested using a modification of the considered divergence measures in the following way:

First of all, it does not make sense to compare two distributions with different supports. Hence, we suppose that the *pdf*'s we are comparing have the same support

Next, for each

And there exist two finite numbers

Besides, we choose the

Based on the remarks that the

So each application should begin with a quick look at the domains of the *pdf*'s and the finding of an appropriate sub-domain

Assumption (20) also ensures that the *pdf*'s

Whenever the functions
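As a numerical illustration of this restriction (ours; the densities, the value of α and the integration bounds are assumptions, and the Rényi divergence is taken in its standard form D_α(f, g) = (α − 1)^{-1} log ∫ f^α g^{1−α}), the sketch below shows how restricting the integral to a compact sub-domain where both *pdf*'s are bounded away from 0 and infinity keeps the divergence finite:

```python
import numpy as np

def renyi_divergence(f, g, xs, alpha=2.0):
    """Trapezoidal approximation of the standard-form Renyi divergence
    (alpha - 1)^(-1) * log( integral of f^alpha * g^(1 - alpha) ),
    with the integral restricted to the sub-domain covered by the grid xs."""
    integrand = f(xs) ** alpha * g(xs) ** (1.0 - alpha)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))
    return float(np.log(integral) / (alpha - 1.0))

def normal_pdf(mu, sigma):
    return lambda x: (np.exp(-0.5 * ((x - mu) / sigma) ** 2)
                      / (sigma * np.sqrt(2.0 * np.pi)))

# Equal variances: the integral converges on the whole line, and the value
# matches the closed form alpha * (mu1 - mu2)^2 / (2 * sigma^2) = 0.25.
d_wide = renyi_divergence(normal_pdf(0.0, 1.0), normal_pdf(0.5, 1.0),
                          np.linspace(-10.0, 10.0, 20001))

# Lighter-tailed g: the integrand f^2 / g grows like exp(x^2), so the
# divergence is infinite on the whole line -- but finite on a compact
# sub-domain where both pdfs are bounded away from 0 and infinity.
d_sub = renyi_divergence(normal_pdf(0.0, 1.0), normal_pdf(0.0, 0.5),
                         np.linspace(-2.0, 2.0, 4001))
print(d_wide, d_sub)
```

The second pair is exactly the kind of situation the restriction above is designed to handle: the modified divergence over the sub-domain is finite even though the unrestricted one is not.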

## 3. CONCLUSION

In this last paper of the series, the main results have been proved. Wavelet theory has proved to be a good framework for the estimation of divergence measures. We believe that having the exact values of the scaling function would give better results in our work.
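On that last point, the values of a compactly supported scaling function can be approximated numerically by the classical cascade algorithm. Below is a minimal sketch for the Daubechies D4 filter (our illustration, with our own grid and iteration choices, not part of the paper): it iterates the refinement equation φ(x) = √2 Σ_k h_k φ(2x − k), and a sanity check is the partition-of-unity identity Σ_k φ(x − k) = 1.

```python
import numpy as np

# Daubechies D4 low-pass filter, normalized so that sum(h) = sqrt(2).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))

# Cascade algorithm: iterate the refinement equation
#   phi(x) = sqrt(2) * sum_k h[k] * phi(2x - k)
# on a dyadic grid over [0, 3], the support of the D4 scaling function.
x = np.linspace(0.0, 3.0, 3 * 2**10 + 1)
phi = np.where((x >= 0.0) & (x < 1.0), 1.0, 0.0)  # initial guess: box function

for _ in range(25):
    acc = np.zeros_like(phi)
    for k, hk in enumerate(h):
        # phi(2x - k), with phi taken as 0 outside [0, 3]
        acc += hk * np.interp(2.0 * x - k, x, phi, left=0.0, right=0.0)
    phi = np.sqrt(2.0) * acc

dx = x[1] - x[0]
print(float(phi.sum() * dx))   # integral of phi, should be close to 1
```

Starting from the box function (which already satisfies the partition of unity), each cascade step preserves both the integral and the partition identity, so the iterates converge to accurate samples of φ on the grid.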

## ACKNOWLEDGMENTS

The three authors acknowledge support from the World Bank Excellence Center (CEA-MITIC), which has continuously funded their research activities since 2014.

## REFERENCES

### Cite this article

Amadou Diadié Bâ, Gane Samb Lo, Diam Bâ. Divergence Measures Estimation and Its Asymptotic Normality Theory Using Wavelets Empirical Processes III. *Journal of Statistical Theory and Applications*, 18(2):113–122, 2019. DOI: 10.2991/jsta.d.190514.002.