Divergence Measures Estimation and Its Asymptotic Normality Theory Using Wavelets Empirical Processes

In this paper we provide the asymptotic theory of the general class of $\phi$-divergence measures, which includes the most common divergence measures: the Renyi and Tsallis families and the Kullback-Leibler measure. Instead of using the Parzen nonparametric estimators of the probability density functions whose discrepancy is estimated, we use the wavelets approach and the geometry of Besov spaces. One-sided and two-sided statistical tests are derived, as well as symmetrized estimators. Almost-sure rates of convergence and an asymptotic normality theorem are obtained in the general case, and next particularized for the Renyi and Tsallis families and for the Kullback-Leibler measure as well. The applicability of the results to usual distribution functions is addressed.


General Introduction.
In this paper, we deal with divergence measures estimation using essentially wavelets density function estimation. Let P be a class of probability measures on R^d, d ≥ 1. A divergence measure on P is a function

(1.1) D : P × P −→ R, (Q, L) −→ D(Q, L),

such that D(Q, Q) = 0 for any Q such that (Q, Q) is in the domain of application of D.
The function D is not necessarily a mapping defined on the whole product space. And when it is, it is not always symmetrical, nor does it have to be a metric. In case of lack of symmetry, the following more general notation is more appropriate:

(1.2) D : P_1 × P_2 −→ R, (Q, L) −→ D(Q, L),

where P_1 and P_2 are two families of probability measures on R^d, not necessarily the same. To better explain our concern, let us introduce some of the most celebrated divergence measures.
A great number of them are based on probability density functions (pdf's). So let us suppose that any Q ∈ P admits a pdf f_Q with respect to a σ-finite measure ν on (R^d, B(R^d)), which is usually the Lebesgue measure λ_d (with λ_1 = λ) or a counting measure on R^d.
We may present the following divergence measures.
(1) The L_2^2-divergence measure:

(1.3) D_{L_2^2}(Q, L) = ∫ (f_Q(x) − f_L(x))^2 dν(x).

(2) The family of Renyi divergence measures indexed by α ≠ 1, α > 0, known under the name of Renyi-α:

(1.4) D_{R,α}(Q, L) = (α − 1)^{-1} log ∫ f_Q^α(x) f_L^{1−α}(x) dν(x).

(3) The family of Tsallis divergence measures indexed by α ≠ 1, α > 0, also known under the name of Tsallis-α:

(1.5) D_{T,α}(Q, L) = (α − 1)^{-1} ( ∫ f_Q^α(x) f_L^{1−α}(x) dν(x) − 1 ).

(4) The Kullback-Leibler divergence measure:

(1.6) D_{KL}(Q, L) = ∫ f_Q(x) log( f_Q(x)/f_L(x) ) dν(x).

The latter, the Kullback-Leibler measure, may be interpreted as a limit case of both the Renyi family and the Tsallis one by letting α → 1. As well, for α near 1, the Tsallis family may be seen as derived from D_{R,α}(Q, L) based on the first-order expansion of the logarithm function in the neighborhood of unity.
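To fix ideas, here is a minimal numerical sketch (ours, not from the paper) evaluating the Renyi-α, Tsallis-α and Kullback-Leibler measures between two Gaussian densities on a grid; for N(0,1) against N(1,1) the closed forms are D_KL = 1/2 and D_{R,α} = α/2, so both α-families approach the Kullback-Leibler value as α → 1:

```python
import numpy as np

def divergences(f, g, xs, alpha):
    """Grid approximation of Renyi-alpha, Tsallis-alpha and
    Kullback-Leibler divergences between densities f and g."""
    fx, gx = f(xs), g(xs)
    dx = xs[1] - xs[0]
    # I(f, g) = int f^alpha g^(1 - alpha), the common building block
    I = np.sum(fx**alpha * gx**(1 - alpha)) * dx
    d_renyi = np.log(I) / (alpha - 1)
    d_tsallis = (I - 1) / (alpha - 1)
    d_kl = np.sum(fx * np.log(fx / gx)) * dx
    return d_renyi, d_tsallis, d_kl

# N(0,1) against N(1,1); exact values: KL = 1/2, Renyi-alpha = alpha/2
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
g = lambda x: np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)
xs = np.linspace(-10.0, 11.0, 200001)
d_renyi, d_tsallis, d_kl = divergences(f, g, xs, alpha=0.99)
```

For α = 0.99 the three values are close to each other, illustrating the limit α → 1 described above.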
From this small sample of divergence measures, we may give the following remarks.
(a) The L_2^2-divergence measure is both a mapping and a metric on P^2, where P is the class of probability measures Q on R^d such that ∫ f_Q^2(x) dν(x) < +∞. (b) For both the Renyi and the Tsallis families, we may have integrability problems and lack of symmetry. For d = 1, it is clear from the very form of these divergence measures that we do not have symmetry, except for the special case where α = 1/2. Next, consider two real random variables X and Y following gamma laws with respective parameters (a, b) ∈ ]0, +∞[^2 and (c, d) ∈ ]0, +∞[^2. Here naturally, we use pdf's with respect to the Lebesgue measure λ on R. Both families are built on the functional ∫ f_X^α(x) f_Y^{1−α}(x) dλ(x), which amounts, in this case, to a Gamma-type integral. This quantity is finite if and only if αa + (1 − α)c ≥ 0 and αb + (1 − α)d ≥ 0.
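The gamma example above can be made concrete. In the sketch below (ours; a shape/rate parametrization of the gamma law is assumed), the closed form of ∫ f^α g^{1−α} follows from the Gamma integral ∫ x^{s−1} e^{−rx} dx = Γ(s)/r^s, and the integral is infinite exactly when the combined shape or rate αa + (1−α)c, αb + (1−α)d fails to be positive:

```python
import numpy as np
from math import gamma as G

def I_gamma(a, b, c, d, alpha):
    """Closed form of int f^alpha g^(1-alpha) for f ~ Gamma(a, b)
    and g ~ Gamma(c, d) (shape/rate); inf when it diverges."""
    s = alpha * a + (1 - alpha) * c      # combined shape
    r = alpha * b + (1 - alpha) * d      # combined rate
    if s <= 0 or r <= 0:
        return np.inf                    # integrand not integrable
    return (b**(alpha * a) * d**((1 - alpha) * c)
            / (G(a)**alpha * G(c)**(1 - alpha)) * G(s) / r**s)

# numerical check of the closed form on a fine grid
a, b, c, d, alpha = 2.0, 1.0, 3.0, 2.0, 0.5
xs = np.linspace(1e-8, 80.0, 400001)
fx = b**a * xs**(a - 1) * np.exp(-b * xs) / G(a)
gx = d**c * xs**(c - 1) * np.exp(-d * xs) / G(c)
I_num = np.sum(fx**alpha * gx**(1 - alpha)) * (xs[1] - xs[0])
I_exact = I_gamma(a, b, c, d, alpha)

# alpha = 2 with shapes (1, 3): 2*1 + (1-2)*3 = -1 <= 0, divergent
I_bad = I_gamma(1.0, 1.0, 3.0, 1.0, 2.0)
```

This makes visible why, for α > 1, the Renyi and Tsallis measures between two gamma laws may simply be infinite.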
From this sample tour, we have to be cautious, when speaking about divergence measures as applications and/or metrics. In the most general case, we have to consider the divergence measure between two specific probability measures as a number or a real parameter.
Originally, divergence measures came as extensions and developments of information theory, which was first set for discrete probability measures. In such a situation, the boundedness of these discrete probability measures away from zero and below +∞ was guaranteed. That is, the following assumption holds:

Boundedness Assumption (BD). There exist two finite numbers 0 < κ_1 < κ_2 < +∞ such that κ_1 ≤ f_Q ≤ κ_2 for any pdf f_Q involved in the divergence measure. (1.7)

If Assumption (1.7) holds, we do not have to worry about integrability problems, especially for the Tsallis, Renyi and Kullback-Leibler measures, in the computations arising in the estimation theories. But, in the generalized context where arbitrary density functions with respect to some measure ν are used, such an assumption is not automatic. This explains why Assumption (1.7) is systematically used in a great number of works on that topic, for example in Singh and Poczos (2014), Krishnamurthy et al. (2014) and Hall (1987), to cite a few. To ensure that Assumption (1.7) is fulfilled, it may be instrumental to restrict the computation of the integral used in the divergence measure to a compact domain D, and next to appeal to the:

Modified Boundedness Condition. There exist 0 < κ_1 < κ_2 < +∞ and a compact domain D, as large as possible, such that κ_1 ≤ f_Q ≤ κ_2 on D for any pdf f_Q involved in the divergence measure.

This implies that the modified divergence measure, denoted by D^(m), is applied to the modified pdf's f_Q 1_D. Based on this technique, which we apply in case of integrability problems, we will suppose, when appropriate, that Assumption (1.7) holds on a compact set D.
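The modified measure D^(m) can be sketched numerically (our illustration, with the compact domain and the constants chosen by us): restrict the integral to a compact D on which both densities stay between κ_1 and κ_2, and check that little is lost when D captures almost all the mass:

```python
import numpy as np

# Modified KL divergence on a compact domain D = [-6, 7] for
# f = N(0,1) and g = N(1,1); exact unrestricted value is 1/2.
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
g = lambda x: np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)

D = np.linspace(-6.0, 7.0, 130001)
dx = D[1] - D[0]
kappa1 = min(f(D).min(), g(D).min())   # > 0 on the compact domain
kappa2 = max(f(D).max(), g(D).max())   # < +infinity

# D^(m)(f, g): KL computed with the modified pdf's f 1_D and g 1_D
dkl_mod = np.sum(f(D) * np.log(f(D) / g(D))) * dx
```

Here the truncation changes the value only by a negligible tail term, while the Modified Boundedness Condition (0 < κ_1 < κ_2 < +∞) holds by construction on D.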
Although we are focusing on the aforementioned divergence measures in this paper, it is worth mentioning that quite a number of others exist. Let us cite for example the ones named after: Ali-Silvey or f-divergence (Topsoe (2000)), Cauchy-Schwarz, Jeffrey (see Evren (2012)), Chernoff (see Evren (2012)) and Jensen-Shannon (see Evren (2012)). According to Cichocki and Amari (2010), there are more than a dozen different divergence measures in the literature.
Before coming back to our divergence measures estimation problem of interest, we want to highlight some important applications of these measures. Indeed, divergence has proven to be useful in applications. Let us cite a few of them: (a) They heavily intervene in Information Theory and, recently, in Machine Learning.
(b) They may be used as similarity measures in image registration or multimedia classification (see Moreno et al. (2004)).
(c) They are also used as loss functions in evaluating and optimizing the performance of density estimation methods (see Hall (1987)).
(d) Divergence estimates can also be used to determine sample sizes required to achieve given performance levels in hypothesis testing.
(e) There has been a growing interest in applying divergence to various fields of science and engineering for the purpose of estimation, classification, etc. (See Bhattacharya (1967), Liu and Shum (2003)).
(f) Divergence also plays a central role in the frame of large deviations results including the asymptotic rate of decrease of error probability in binary hypothesis testing problems.
(g) The estimation of divergence between the samples drawn from unknown distributions gauges the distance between those distributions. Divergence estimates can then be used in clustering and in particular for deciding whether the samples come from the same distribution by comparing the estimate to a threshold.
(h) Divergence gauges how differently two random variables are distributed and thus provides a useful measure of discrepancy between distributions. In the frame of information theory, the key role of divergence is well known.
In the next subsection, we describe the frame in which we place the estimation problems we deal with in this paper.

Statistical Estimation.
The divergence measures may be applied to two statistical problems among others.
(A) First, they may be used in a fitting problem, as described here. Let X_1, X_2, ... be a sample from X with an unknown probability distribution P_X, and suppose we want to test the hypothesis that P_X is equal to a known and fixed probability P_0. Theoretically, we can answer this question by estimating a divergence measure D(P_X, P_0) by a plug-in estimator D(P_X^(n), P_0) where, for each n ≥ 1, P_X is replaced by an estimator P_X^(n) of the probability law, based on the sample X_1, X_2, ..., X_n, to be specified.
From there, establishing an asymptotic theory of ∆_n = D(P_X^(n), P_0) − D(P_X, P_0) is necessary to conclude.
(B) Next, they may be used as a tool for comparing two distributions. We may have two samples and wonder whether they come from the same probability measure. Here, we may also have two different cases.
(B1) In the first, we have two independent samples X_1, X_2, ... and Y_1, Y_2, ..., respectively from random variables X and Y. Here the estimated divergence D(P_X^(n), P_Y^(m)), where n and m are the sizes of the available samples, is the natural estimator of D(P_X, P_Y), on which the statistical test of the hypothesis P_X = P_Y depends.
(B2) But the data may also be paired: (X_1, Y_1), (X_2, Y_2), ..., that is, X_i and Y_i are measurements on the same case i = 1, 2, ... In such a situation, testing the equality of the margins P_X = P_Y should be based on an estimator P^(n)_(X,Y) of the joint probability law of the couple (X, Y), based on the paired observations (X_i, Y_i), i = 1, 2, ..., n.
We did not encounter the approach (B2) in the literature. In the (B1) approach, almost all the papers used the same sample size, with the exception of Poczos and Jeff (2011) for the double-size estimation problem. In our view, the study should rely on all the available data, so that using the same sample size may lead to a loss of information. To apply such a method, one should take the minimum of the two sizes and thus lose information. We suggest coming back to the general case and studying the asymptotic theory of D(P_X^(n), P_Y^(m)) based on samples X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_m. In this paper, we will systematically use arbitrary sample sizes.
In the context of the situation (B1), there are several papers dealing with the estimation of divergence measures. As we are concerned in this paper with the weak laws of the estimators, our review of that problematic returned only a few results. Instead, the literature presented many kinds of results on the almost-sure efficiency of the estimation, with rates of convergence and laws of the iterated logarithm, L_p (p = 1, 2) convergences, etc. To be precise, Dhakher et al. (2016) used recent techniques based on the functional empirical process to provide a series of interesting rates of convergence of the estimators, in the case of the one-sided approach, for the Renyi and Tsallis classes and the Kullback-Leibler measure, to cite a few. Unfortunately, the authors did not address the problem of integrability, taking for granted that the divergence measures are finite. Although the results should be correct under the boundedness assumption (BD) we described earlier, a new formulation in that frame would be welcome.
The paper of Krishnamurthy et al. (2015) is the closest to what we want to do, except that it concentrates on the L_2-divergence measure and uses the Parzen approach. Instead, we will handle the most general case of the φ-divergence measure and will use wavelets probability density estimators.
Still in the context of the situation (B1), we may cite first the works of Krishnamurthy et al. (2014) and Singh and Poczos (2014). They both used divergence measures based on probability density functions and concentrated on Renyi-α, Tsallis-α and Kullback-Leibler. In the description of the results below, the estimated pdf's f and g are usually in a periodic Hölder class of a known smoothness s. Krishnamurthy et al. (2014) defined Renyi and Tsallis estimators by correcting the plug-in estimator, and established convergence rates holding as long as D_{R,α}(f, g) ≥ c and D_{T,α}(f, g) ≥ c, for some constant c > 0. There has been a recent interest in deriving convergence rates for divergence estimators (Moon and Hero (2014), Krishnamurthy et al. (2014)). The rates are typically derived in terms of the smoothness s of the densities.

Specifically, the estimator of Liu et al. (2012) converges at the rate n^{−s/(s+d)}, achieving the parametric rate when s > d.
Similarly, Sricharan et al. (2012) showed that when s > d, a k-nearest-neighbor style estimator achieves the rate n^{−2/d} (in absolute error), ignoring logarithmic factors. In a follow-up work, the authors improved this result to O(n^{−1/2}) by using an ensemble of weak estimators, but they required s > d orders of smoothness. Singh and Poczos (2014) provided an estimator for Renyi-α divergences, as well as for general density functionals, that uses a mirror-image kernel density estimator. They obtained exponential inequalities for the deviation of the estimators from the true value. Kallberg and Seleznjev (2012) studied an ε-nearest-neighbor estimator for the L_2-divergence that enjoys the same rate of convergence as the projection-based estimator of Krishnamurthy et al. (2014).
The majority of the aforementioned articles worked with densities in Hölder classes, whereas our work applies to densities in Besov classes.
Here, we will focus on divergence measures between probability laws that are absolutely continuous with respect to the Lebesgue measure. As well, our results apply to the approaches (A) and (B1) defined above. As a consequence, we estimate divergence measures by their plug-in counterparts, meaning that we replace the probability density functions (pdf's) in the expression of the divergence measure by nonparametric estimators of the pdf's. From now on, we have on our probability space two independent sequences:

(-) a sequence X_1, X_2, ... of independent and identically distributed random variables with common pdf f_{P_X}; (1.9)

(-) a sequence Y_1, Y_2, ... of independent and identically distributed random variables with common pdf g_{P_Y}. (1.10)

To make the notation simpler, we write f = f_{P_X} and g = g_{P_Y}. We focus on using pdf estimates provided by the wavelets approach. We will deal with the Parzen approach in a forthcoming study. So, we need to explain the frame in which we are going to express our results.
We also wish to get, first, general laws for an arbitrary functional of the form

(1.11) J(f, g) = ∫ φ(f(x), g(x)) dx,

where φ(x, y) is a measurable function of (x, y) ∈ R_+^2 on which we will impose appropriate conditions. The results on the functional J(f, g), which is also known under the name of φ-divergence, will lead to those on the particular cases of the Renyi, Tsallis and Kullback-Leibler measures.
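The unifying role of J can be illustrated numerically (a sketch of ours, with densities and grid chosen for the example): each particular measure is recovered by a specific choice of φ:

```python
import numpy as np

def J(phi, f, g, xs):
    """Grid approximation of the phi-divergence functional
    J(f, g) = int phi(f(x), g(x)) dx."""
    return np.sum(phi(f(xs), g(xs))) * (xs[1] - xs[0])

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)        # N(0, 1)
g = lambda x: np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)  # N(1, 1)
xs = np.linspace(-8.0, 9.0, 170001)

# Kullback-Leibler: phi(s, t) = s log(s/t); exact value 1/2 here
j_kl = J(lambda s, t: s * np.log(s / t), f, g, xs)

# Tsallis-alpha: phi(s, t) = (s^alpha t^(1-alpha) - s)/(alpha - 1);
# for alpha = 2 and these Gaussians the exact value is e - 1
alpha = 2.0
j_ts = J(lambda s, t: (s**alpha * t**(1 - alpha) - s) / (alpha - 1),
         f, g, xs)
```

The same J, fed with different φ's, thus produces all the particular measures studied below.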
The rest of the paper is organized as follows. In the remainder of this section, we describe the wavelets density estimators we will use, alongside basic notation and assumptions. In Section 2, we give our full results for the functional J(f, g), both in the one-sided and two-sided approaches. In Section 3, we particularize the results for the specific measures we already described. The proofs are postponed to Section 4. Technical remarks are gathered in the Appendix (Section 5).

Wavelets estimation of pdf 's.
To begin with wavelets theory and its statistical applications, we recall that the wavelets setting involves two functions ϕ and ψ in L^2(R), respectively called the father and the mother wavelet, such that the family

{ϕ(· − k), k ∈ Z} ∪ {ψ_{j,k}, j ≥ 0, k ∈ Z}

is an orthonormal basis of L^2(R). We adopt the following notation, for j ≥ 0, k ∈ Z: ϕ_{j,k} = 2^{j/2} ϕ(2^j(·) − k) and ψ_{j,k} = 2^{j/2} ψ(2^j(·) − k).
Thus, any function f in L^2(R) is characterized by its coordinates in this orthonormal basis, in the form

f = Σ_{k∈Z} ⟨f, ϕ_{0,k}⟩ ϕ_{0,k} + Σ_{j≥0} Σ_{k∈Z} ⟨f, ψ_{j,k}⟩ ψ_{j,k}.

For an easy introduction to wavelets theory and its applications to statistics, see for instance Hardle et al. (1998), Daubechies (1992), Blatter (1998), etc. In this paper we only mention the unavoidable elements of this frame.
Based on the orthonormal basis defined above, the following kernel function is introduced:

K(x, y) = Σ_{k∈Z} ϕ(x − k) ϕ(y − k).

For any fixed j ≥ 1, called a resolution level, we define K_j(x, y) = 2^j K(2^j x, 2^j y) and, for a measurable function h, the projection operator

K_j(h)(x) = ∫ K_j(x, y) h(y) dy.

Therefore we can write, for all x ∈ R, K_j(h)(x) = Σ_{k∈Z} ⟨h, ϕ_{j,k}⟩ ϕ_{j,k}(x). In the frame of this wavelets theory, for each n ≥ 1, we fix a resolution level depending on n and denoted by j = j_n, and we use the following estimator of the pdf f associated to X, based on the sample of size n from X as defined in (1.9):

(1.14) f_n(x) = (1/n) Σ_{i=1}^n K_{j_n}(x, X_i).

As well, in a two-sample problem, we estimate the pdf g associated to Y, based on the sample of size m from Y as defined in (1.10), by

g_m(x) = (1/m) Σ_{i=1}^m K_{j_m}(x, Y_i).

These estimators are known under the name of linear wavelets estimators.
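As an illustration of the linear wavelets estimator (1.14), here is a minimal sketch of ours with the Haar father wavelet ϕ = 1_{[0,1)}. Haar does not meet the smoothness requirements of the assumptions below, but it makes K_{j_n} fully explicit: the estimator reduces to a dyadic histogram with 2^{j_n} bins per unit:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_wavelet_density(sample, x, j):
    """Linear wavelet estimator at resolution j with the Haar father
    wavelet: f_n(x) = (1/n) sum_i K_j(x, X_i), where
    K_j(x, y) = 2^j sum_k phi(2^j x - k) phi(2^j y - k)."""
    n = len(sample)
    kx = np.floor(2.0**j * np.atleast_1d(x)).astype(np.int64)  # bin of x
    ks = np.floor(2.0**j * sample).astype(np.int64)            # bins of X_i
    counts = np.array([(ks == k).sum() for k in kx])
    return 2.0**j * counts / n

n = 200000
sample = rng.random(n)**0.5      # pdf f(x) = 2x on [0, 1]
jn = int(np.log2(n) / 4)         # resolution 2^{j_n} ~ n^{1/4}
est = haar_wavelet_density(sample, np.array([0.25, 0.5, 0.75]), jn)
# true density values 0.5, 1.0, 1.5, recovered up to a bin-level
# bias of order 2^{-j_n} plus sampling noise
```

The choice 2^{j_n} ≈ n^{1/4} anticipates Assumption 4 below.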
Before giving the main assumptions on the wavelets we are working with, we have to define the concept of weak differentiation. Denote by D(R) the class of infinitely differentiable functions from R to R with compact support. A function f : R → R is weakly differentiable if and only if there exists a locally integrable (on compact sets) function g : R → R such that, for any φ ∈ D(R), we have

∫ f(x) φ′(x) dx = − ∫ g(x) φ(x) dx.

In such a case, g is called the weak derivative of f and is denoted f^[1]. If the first weak derivative has itself a weak derivative, and so forth up to the (p − 1)-th derivative, we get the p-th weak derivative f^[p]. Now we may state the assumptions we require on the wavelets.

Assumption 1. The wavelets ϕ and ψ are bounded and have compact support, and either (i) the father wavelet ϕ has weak derivatives up to order T in L_p(R) (1 ≤ p ≤ ∞), or (ii) the mother wavelet ψ associated to ϕ satisfies ∫ x^m ψ(x) dx = 0 for all m = 0, ..., T.

Wavelets generators with compact supports are available in the literature. We may cite those named after Daubechies, Coiflets and Symmlets (see Hardle et al. (1998)). The cited generators fulfill our main assumptions.
Under Assumption 2, the summation over k in (1.13) is finite, since only a finite number of terms in the summation are non-zero (see Giné and Nickl (2009)).
The third assumption concerns the resolution level we choose. We fix once and for all an increasing sequence (j_n)_{n≥1}.

Assumption 3. There exists a non-negative, symmetrical and continuous function Φ(t) of t ∈ R with compact support K such that |K(x, y)| ≤ Φ(x − y) for all (x, y) ∈ R^2.

Assumption 4. lim_{n→+∞} n^{−1/4} 2^{j_n} = 1.
In particular, we have, as n → ∞, 2^{j_n} → +∞ and 2^{j_n}/n → 0. These conditions allow the use of the results of Giné and Nickl (2009).
We also denote a_n = ‖f_n − f‖_∞, b_n = ‖g_n − g‖_∞ and c_n = max(a_n, b_n), where ‖h‖_∞ stands for sup_{x∈D(h)} |h(x)| and D(h) is the domain of application of h.
In the sequel we suppose that the densities f and g belong to the Besov space B^t_{∞,∞}(R). We will say a word on simple conditions under which our pdf's do belong to such spaces.
Suppose that the densities f and g belong to B^t_{∞,∞}(R), that ϕ satisfies Assumption 2, and that ϕ and ψ satisfy Assumption 1. Then Theorem 3 of Giné and Nickl (2009) implies that the rates of convergence a_n, b_n and c_n are of the form

O( ( (1/4) log_2 n / n^{3/4} )^{1/2} + n^{−t/4} ) almost surely,

and all converge to zero at this rate (with 0 < t < T).
In order to establish the asymptotic normality of the divergence estimators, we need the following key tool concerning the wavelets empirical process, denoted by G^w_{n,X}(h), where h ∈ B^t_{∞,∞}(R), and defined by

(1.18) G^w_{n,X}(h) = √n ( ∫ h(x) f_n(x) dx − E_X(h) ),

where E_X(h) = ∫ h(x) f(x) dx denotes the expectation of the measurable function h with respect to the probability distribution P_X. The superscript w refers to wavelets. We are now ready to give our results on the functional J introduced in Formula (1.11).

Main Results.
Here, we present a general asymptotic theory for a class of divergence measure estimators, including the Renyi and Tsallis families and the Kullback-Leibler measure.
Actually, we gather them in the φ-divergence measure form. We will obtain a general frame from which we will derive a number of corollaries. Assumption (1.7) will be used in the particular cases to ensure the finiteness of the divergence measure, as mentioned at the beginning of the article. In the general results, however, Assumption (1.7) is part of the general conditions.
We begin by stating a result, related to the wavelets empirical process, that serves as a general tool for establishing the asymptotic normality of divergence measures.
Theorem 1. Let (X_n)_{n≥1} be defined as in (1.9) with f ∈ B^t_{∞,∞}(R), let f_n be defined as in (1.14) and let G^w_{n,X} be defined as in (1.18). Then, for any h ∈ B^t_{∞,∞}(R), G^w_{n,X}(h) converges in distribution to a centered Gaussian random variable with variance Var(h(X)), as n → ∞.

Based on that result, which will be proved later, we are going to state all the results on the functional J defined in Formula (1.11), regarding its almost-sure and Gaussian asymptotic behavior. Let us begin with some notation. We assume that φ has continuous second-order partial derivatives, with φ^(1) and φ^(2) denoting the first-order partial derivatives with respect to the first and second coordinates, and φ^(1,1), φ^(2,2) and φ^(1,2)(s, t) = φ^(2,1)(s, t) the second-order ones. Define the functions h_i, i = 1, ..., 4, built on these partial derivatives composed with (f, g). We require the following general conditions.

C-A. All the constants A_i are finite.
C-h. All the functions h_i used in the theorems below are bounded and lie in a Besov space B^t_{∞,∞} for some t > 1/2.
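The behavior stated in Theorem 1 can be checked by a small Monte Carlo experiment (ours, with Haar wavelets, h(x) = x and X uniform on (0,1), so that K_{j_n}(h)(X_i) is simply the center of the dyadic bin containing X_i):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of the CLT for the wavelets empirical process:
# G^w_{n,X}(h) = sqrt(n) * (int h f_n - E h(X)) for h(x) = x.
n, M = 4096, 2000
j = 3                           # 2^j = 8 = n^{1/4} exactly for n = 4096
G = np.empty(M)
for m in range(M):
    X = rng.random(n)
    # with Haar wavelets, K_j(h)(X_i) is the average of h over the
    # dyadic bin of X_i; for h(x) = x this is the bin center
    centers = (np.floor(2.0**j * X) + 0.5) / 2.0**j
    G[m] = np.sqrt(n) * (centers.mean() - 0.5)   # E h(X) = 1/2

meanG, varG = G.mean(), G.var()
# expected limit: N(0, Var(h(X))) = N(0, 1/12)
```

The empirical mean is close to 0 and the empirical variance close to Var(X) = 1/12, in line with the limiting centered Gaussian law.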
Here are our main results.
I -Statements of the main results.
The first concerns the almost sure efficiency of the estimators.
The second concerns the asymptotic normality of the estimators.

II -Direct extensions.
Quite a number of divergence measures are not symmetrical, among them some of the most interesting ones. For such measures, the estimators J(f_n, g), J(f, g_n) and J(f_n, g_n) are not equal to J(g, f_n), J(g_n, f) and J(g_n, f_n), respectively.
In one-sided tests, we have to decide whether the hypothesis f = g, for g known and fixed, is true, based on data from f. In such a case, we may use one of the statistics J(f_n, g) and J(g, f_n) to perform the tests. We may have information that allows us to prefer one of them over the other. If not, it is better to use both of them, provided both J(f, g) and J(g, f) are finite, in the symmetrized form (J(f_n, g) + J(g, f_n))/2. The same situation applies when we face two-sided tests, i.e., testing f = g from data generated from f and from g.
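The symmetrization can be sketched as follows (our numerical illustration): for the Kullback-Leibler measure between N(0,1) and N(0,2), the two one-sided values differ, while their average is symmetric by construction:

```python
import numpy as np

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # N(0, 1)
g = lambda x: np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)  # N(0, 2)
xs = np.linspace(-10.0, 10.0, 200001)
dx = xs[1] - xs[0]

def kl(p, q):
    """Grid approximation of the one-sided KL measure D(p, q)."""
    return np.sum(p(xs) * np.log(p(xs) / q(xs))) * dx

kl_fg, kl_gf = kl(f, g), kl(g, f)   # not equal: KL is not symmetric
d_sym = 0.5 * (kl_fg + kl_gf)       # symmetrized form, exactly 1/8 here
```

Here kl_fg ≈ 0.0966 and kl_gf ≈ 0.1534, so the order of the arguments matters, whereas d_sym does not depend on it.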

Theorem 5. Under Assumptions 1-3, C-A, C-h, C1-φ, C2-φ and (BD), we obtain the corresponding asymptotic laws for the symmetrized statistics.

Remark. The proof of these extensions will not be given here, since they are straight consequences of the main results. As well, such considerations will not be repeated for the particular measures, for the same reason.
We are now going to give special forms of these main results in a number of corollaries.
To handle the Renyi and the Tsallis families, we first get general results on the functional used by both families. In turn, the treatment of each of them is derived from the functional I using the delta method. For all these particular cases, we do not give the proofs, since they derive from the general cases by straightforward computations.

Particular cases
A -Renyi and Tsallis families.
These two families are expressed through the functional

I(f, g) = ∫_D f^α(x) g^{1−α}(x) dx,

which is of the form of the φ-divergence measure with φ(s, t) = s^α t^{1−α}.
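The delta-method step can be illustrated by a Monte Carlo sketch of ours (a plug-in estimator of I built directly from the known densities, not the wavelets estimator of the paper): since Renyi-α is the smooth map I ↦ log(I)/(α − 1), the asymptotic variance of the Renyi estimator is that of I_n divided by ((α − 1) I)^2:

```python
import numpy as np

rng = np.random.default_rng(2)

# For f = N(0,1), g = N(1,1) and alpha = 2, one has
# (g(X)/f(X))^(1-alpha) = f(X)/g(X) = exp(1/2 - X),
# hence I = E_f[exp(1/2 - X)] = e and D_{R,2} = log(I)/(2-1) = 1.
alpha, M, n = 2.0, 3000, 2000
G = np.empty(M)
for m in range(M):
    X = rng.standard_normal(n)
    I_n = np.mean(np.exp(0.5 - X))                    # plug-in estimate of I
    G[m] = np.sqrt(n) * (np.log(I_n) / (alpha - 1) - 1.0)

# delta method: Var = Var_f(f/g) / ((alpha-1) I)^2 = (e^3 - e^2)/e^2 = e - 1
var_mc, var_delta = G.var(), np.e - 1.0
```

The empirical variance of √n (D_{R,2,n} − D_{R,2}) matches the delta-method prediction e − 1.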

Corollary 3. Let Assumptions 1-3 hold and let (BDE) be satisfied. Then the almost-sure rates of convergence and the lim sup bounds, as n → +∞ and m → +∞, hold for the plug-in estimators of I.
The treatment of the asymptotic behaviour of the Renyi-α divergence, α > 0, α ≠ 1, is obtained from Part (A) by expansions and by the application of the delta method. We first remark that

D_{R,α}(f, g) = (α − 1)^{-1} log I(f, g) and D_{T,α}(f, g) = (α − 1)^{-1} (I(f, g) − 1).

We have the following results.
As to the symmetrized form, we need supplementary notation. We have:

Corollary 15. Let Assumptions 1-3 hold, and let (BDE) be satisfied. Then the corresponding results hold for the symmetrized divergence.
We begin with the proof of Theorem 1.
A -Proof of Theorem 1.
Suppose that Assumptions 1 and 2 are satisfied and that h ∈ B^t_{∞,∞}(R).
We have the decomposition

G^w_{n,X}(h) = √n (P_{n,X} − E_X)(K_{j_n}(h)) + √n R_{1,n},

where P_{n,X} is the empirical measure of X_1, ..., X_n and R_{1,n} = E_X(K_{j_n}(h)(X)) − E_X(h(X)). To complete the proof, we have to show that: (1) √n (P_{n,X} − E_X)(K_{j_n}(h)) converges in distribution to a centered normal distribution, and (2) √n R_{1,n} converges to zero in probability, as n → ∞. In the sequel, all limits are meant as n → ∞, unless the contrary is specified.
For the first point, we apply the central limit theorem for independent random variables. We have to check the Lindeberg-Feller-Levy conditions (see Loève (1972), Point B, pp. 292). Let us denote Z_{i,n} = K_{j_n}(h)(X_i) and σ^2_{i,n} = Var(Z_{i,n}), 1 ≤ i ≤ n, and next s^2_n = σ^2_{1,n} + ... + σ^2_{n,n}, n ≥ 1. We have to check that (L1) s_n^{-1} max{σ_{i,n}, 1 ≤ i ≤ n} → 0 and that, for any fixed ε > 0, the Lindeberg condition holds. To prove this, let us begin by bounding K_{j_n}(h)(x) for any x ∈ D. By a change of variables and by Assumption 3, we obtain, for any x ∈ D, a bound in terms of Φ. Denote by C a bound of the compact set K which supports Φ and set c = ‖Φ‖_∞ λ(K).
Since h is continuous on the compact set D, it is uniformly continuous, and we obtain bounds valid for all p ≥ 1, all n ≥ 1 and all x ∈ D. From these we derive some consequences, and the two last formulas yield a control of s^2_n. Besides, the c_2-inequality applies. By applying this c_2-inequality to the two terms in the right-hand side of Formula (4.6), based on Formulas (4.2) and (4.3), and by denoting Z = 2(h(X)^2 + (E h(X))^2) and δ_n = 2c(2 + ‖h‖_∞)ρ(h, n), we obtain the required bound, provided that n is large enough to ensure that cρ(h, n) ≤ 1. By the way, we also have Z + δ_n ≤ 6‖h‖_∞^2 + δ_n = ∆_n → 6‖h‖_∞^2.
As to the second point, we apply Theorem 9.3 in Hardle et al. (1998) to obtain that √n R_{1,n} converges to zero for any 1/2 < t < T.

B -Proof of Theorem 2.
In the proofs, we systematically use the mean value theorem. In the multivariate setting, we prefer to use the Taylor-Lagrange-Cauchy formula as stated in Valiron (1966), page 230. The assumptions have already been set up to meet these two tools. To keep the notation simple, we introduce the two following quantities: a_n = ‖∆_n f‖_∞ and b_n = ‖∆_n g‖_∞, where ∆_n f = f_n − f and ∆_n g = g_n − g.
We start by showing that (2.4) holds.
By applying the mean value theorem to the function u_1 ↦ φ(u_1(x), g(x)), we have

(4.8) φ(f_n(x), g(x)) = φ(f(x), g(x)) + (f_n(x) − f(x)) φ^(1)(f(x) + θ_1(x)(f_n(x) − f(x)), g(x)),

where θ_1(x) is some number lying between 0 and 1. In the sequel, any θ_i satisfies |θ_i| < 1. By applying the mean value theorem again to the function u_2 ↦ φ^(1)(·, u_2(x)), where θ_2(x) is some number lying between 0 and 1, we can rewrite (4.8) accordingly. Under Assumption (1.7), we know that A_1 < ∞ and that condition (2.1) is satisfied. This proves (2.4).
Formula (2.5) is obtained in a similar way; we only need to adapt the result concerning the first coordinate to the second one.
The proof of (2.6) comes by splitting ∫_D (φ(f_n(x), g_m(x)) − φ(f(x), g(x))) dx into two terms, I_{n,1} and I_{n,2}. We already know how to handle I_{n,2}. As to I_{n,1}, we may still use the Taylor-Lagrange-Cauchy formula (see Valiron (1966), page 230).
From there, the combination of these remarks leads to the result.
The result (2.8) is obtained by a symmetry argument, by swapping the roles of f and g. Now, it remains to prove Formula (2.9) of the theorem. Let us use the bivariate Taylor-Lagrange-Cauchy formula to get an expansion at the point

(u_n(x), v_n(x)) = (f(x) + θ∆_n f(x), g(x) + θ∆_n g(x)).

Thus we get the announced expansion, where R_{n,m} is a remainder term, and where N_n(i) ∼ N(0, Var(h_i(X))), i = 1, 2, with N_n(1) and N_n(2) independent.
From there, the conclusion is immediate.

Annexe
Here, we address the applicability of our results to usual distribution functions. We have seen that we need to avoid infinite and null values: for example, in the integrals of the Renyi or the Tsallis families, we may encounter such problems, as signaled in the first pages of this paper. To avoid them, we already suggested using a modification of the considered divergence measure, in the following way.
First of all, it does not make sense to compare two distributions with different supports. Comparing a pdf with support R, like the Gaussian one, with another with support [0, 1], like the standard uniform one, is meaningless. So, we suppose that the pdf's we are comparing have the same support D.
Next, for each ε > 0, we find a domain D_ε included in the common support D of f and g such that

(5.1) ∫_{D_ε} f(x) dx ≥ 1 − ε and ∫_{D_ε} g(x) dx ≥ 1 − ε,

and such that there exist two finite numbers κ_1 > 0 and κ_2 > 0 with

(5.2) κ_1 ≤ f ≤ κ_2 and κ_1 ≤ g ≤ κ_2 on D_ε.

Besides, we choose the D_ε's increasing to D as ε decreases to zero. We define the modified divergence measure

(5.3) D^(ε)(f, g) = D(f 1_{D_ε}, g 1_{D_ε}).
We may denote f_ε = f 1_{D_ε} and g_ε = g 1_{D_ε}. Based on the remarks that the D_ε's increase to D as ε decreases to zero and that the equality of f and g implies that of f_ε and g_ε, we recommend replacing the exact test of f = g by the approximate test f_ε = g_ε, for ε as small as possible.
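The construction of D_ε can be sketched for a concrete law (our example, with f the standard Gaussian and D_ε taken symmetric): a bisection on the cdf yields the smallest interval [−q, q] carrying mass at least 1 − ε, on which f is then bounded below by a positive κ_1:

```python
import numpy as np
from math import erf, sqrt

eps = 1e-3
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) pdf
f_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))         # N(0,1) cdf

# bisection for q such that P(|X| <= q) = 1 - eps, i.e. D_eps = [-q, q]
lo, hi = 0.0, 10.0
for _ in range(80):
    mid = (lo + hi) / 2
    if f_cdf(mid) - f_cdf(-mid) < 1 - eps:
        lo = mid
    else:
        hi = mid
q = (lo + hi) / 2          # about 3.29 for eps = 1e-3
kappa1 = f(q)              # minimum of f on D_eps, strictly positive
```

On D_ε = [−q, q] the Gaussian density satisfies κ_1 ≤ f ≤ f(0) < +∞, so Assumption (5.2) holds for the approximate test.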
So each application should begin with a quick look at the common domain D of the two pdf's and with the determination of the appropriate sub-domain D_ε on which the tests are applied. Assumption (5.2) also ensures that the pdf's f_ε and g_ε lie in B^t_{∞,∞} for almost all the usual laws. Actually, according to Hardle et al. (1998), a function f belongs to B^t_{∞,∞} whenever it has bounded derivatives up to order [t] and its [t]-th derivative is Hölder of order t − [t], where [t] stands for the integer part of the real number t, that is, the greatest integer less than or equal to t, and f^(p) denotes the p-th derivative function of f.
Whenever the functions f_ε and g_ε have bounded ([t] + 1)-th derivatives that do not vanish on D_ε, they belong to B^t_{∞,∞}. Assumption (5.2) has been set up on purpose for this. Once this is obtained, all the functions that are required to lie in B^t_{∞,∞} for the validity of the results effectively are in that space. All the examples we use in this section satisfy these conditions, including the Gaussian, Gamma and Hyperbolic laws, to cite a few.