Applied Mathematics, 2015, 6, 1592-1610. Published Online August 2015 in SciRes. http://www.scirp.org/journal/am http://dx.doi.org/10.4236/am.2015.69142

From Nonparametric Density Estimation to Parametric Estimation of Multidimensional Diffusion Processes

Julien Apala N'drin¹, Ouagnina Hili²

¹Laboratory of Applied Mathematics and Computer Science, University Felix Houphouët Boigny, Abidjan, Côte d'Ivoire
²Laboratory of Mathematics and New Technologies of Information, National Polytechnique Institute Houphouët-Boigny of Yamoussoukro, Yamoussoukro, Côte d'Ivoire
Email: [email protected], [email protected]

Received 17 June 2015; accepted 18 August 2015; published 21 August 2015

Copyright © 2015 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract

The paper deals with the estimation of the parameters of multidimensional diffusion processes that are discretely observed. We construct an estimator of the parameters based on the minimum Hellinger distance method, which minimizes the Hellinger distance between the density of the invariant distribution of the diffusion process and a nonparametric estimator of this density. We give conditions which ensure the existence of an invariant measure that admits a density with respect to the Lebesgue measure, as well as the strong mixing property with exponential rate for the Markov process. Under these conditions, we define a kernel estimator of the density and study its properties (almost sure convergence and asymptotic normality). Then, using this density estimator, we construct the minimum Hellinger distance estimator of the parameters of the diffusion process and establish its almost sure convergence and asymptotic normality. To illustrate the properties of the estimator of the parameters, we apply the method to two examples of multidimensional diffusion processes.

Keywords

Hellinger Distance Estimation, Multidimensional Diffusion Processes, Strong Mixing Process, Consistency, Asymptotic Normality

1. Introduction

Diffusion processes are widely used for modeling purposes in various fields, especially in finance. Many papers


are devoted to the estimation of the parameters of the drift and diffusion coefficients of diffusion processes from discrete observations. Since a diffusion process is Markovian, maximum likelihood estimation is the natural choice and yields consistent and asymptotically normal estimators when the transition probability density is known [1]. However, in the discrete case, the transition probability density of most diffusion processes cannot be calculated explicitly, which prevents the use of this method. To solve this problem, several methods have been developed, such as the approximation of the likelihood function [2] [3], the approximation of the transition density [4], approximation schemes for the diffusion [5] or methods based on martingale estimating functions [6]. In this paper, we study the multidimensional diffusion model

$$dX_t = a(X_t, \theta)\,dt + b(X_t, \theta)\,dW_t, \quad t \ge 0,$$

under the condition that X_t is positive recurrent and exponentially strongly mixing. We assume that the diffusion process is observed at regularly spaced times t_k = kΔ, where Δ is a positive constant. Using the density of the invariant distribution of the diffusion, we construct an estimator of θ based on the minimum Hellinger distance method. Let f_θ denote the density of the invariant distribution of the diffusion. The estimator of θ is the value (or values) θ̂_n in the parameter space Θ which minimizes the Hellinger distance between f_θ and f̂_n, where f̂_n is a nonparametric estimator of f_θ. The interest of this method is that minimum Hellinger distance estimation yields efficient and robust estimators [7]. Minimum Hellinger distance estimators have been used in parameter estimation for independent observations [7], for nonlinear time series models [8] and, recently, for univariate diffusion processes [9].

The paper is organized as follows. In Section 2, we present the statistical model and conditions which imply that X_t is positive recurrent and exponentially strongly mixing. Consistency and asymptotic normality of the kernel estimator of the density of the invariant distribution are studied in the same section. Section 3 defines the minimum Hellinger distance estimator of θ and studies its properties (consistency and asymptotic normality). Section 4 is devoted to examples and simulations. Proofs of some results are presented in the Appendix.

2. Nonparametric Density Estimation

We consider the d-dimensional diffusion process solution of the multivariate stochastic differential equation:

$$dX_t = a(X_t, \theta)\,dt + b(X_t, \theta)\,dW_t, \quad t \ge 0, \qquad (1)$$

where {W_t}_{t≥0} is a standard l-dimensional Wiener process, θ is an unknown parameter which varies in a compact subset Θ of ℝ^s, a: ℝ^d × Θ → ℝ^d is the drift coefficient and b: ℝ^d × Θ → ℝ^{d×l} is the diffusion coefficient. We assume that the functions a and b are known up to the parameter θ and that b is bounded. We denote by θ_0 the unknown true value of the parameter. For a matrix A, the notation A^t denotes the transpose of A. We use ‖·‖ to denote a vector norm or a matrix norm. The process X_t is observed at discrete times t_k = kΔ, where Δ is a positive constant.

We make the following assumptions on the model:

(A1): there exists a constant C such that ‖a(x,θ) − a(y,θ)‖ + ‖b(x,θ) − b(y,θ)‖ ≤ C‖x − y‖;

(A2): there exist constants M_0 > 0 and r > 0 such that ⟨a(x,θ), x⟩ ≤ −r‖x‖ for ‖x‖ ≥ M_0, where ⟨·,·⟩ denotes the scalar product in ℝ^d;

(A3): the matrix function b(x,θ) is non-degenerate, that is,
$$\inf_{x}\ \inf_{\|\lambda\|=1} \lambda^t\, b(x,\theta)\, b^t(x,\theta)\, \lambda > 0, \quad \lambda \in \mathbb{R}^d.$$


Assumptions (A1)-(A3) ensure that Equation (1) has a unique strong solution, that the process {X_t} admits an invariant measure with a density with respect to the Lebesgue measure, and that {X_t} is strongly mixing with exponential rate [10]-[12]. We denote by α the strong mixing coefficient. In the sequel, we assume that the initial value X_0 follows the invariant law, which implies that the process {X_t} is strictly stationary.

We consider the kernel estimator f̂_n(x) of f_θ(x), that is,
$$\hat f_n(x) = \frac{1}{n b_n^d} \sum_{k=1}^{n} K\!\left(\frac{x - X_k}{b_n}\right), \quad x \in \mathbb{R}^d,$$

where (b_n) is a sequence of bandwidths such that b_n → 0 and n b_n^d → +∞ as n → +∞, and K: ℝ^d → ℝ_+ is a non-negative kernel function which satisfies the following assumptions:

(A4): (1) there exists N_1 > 0 such that K(·) ≤ N_1 < +∞; (2) ∫ K(x) dx = 1 and ‖x‖^d K(x) → 0 as ‖x‖ → ∞;

(A5): ∫ u_i K(u) du = 0 and ∫ u_i² K(u) du < ∞ for i = 1, …, d.

We finish with assumptions concerning the density of the invariant distribution:

(A6): f_θ(·) is twice continuously differentiable with respect to x;

(A7): θ_1 ≠ θ_2 implies that f_{θ_1}(x) ≠ f_{θ_2}(x) for all x ∈ ℝ^d.

The properties (consistency and asymptotic normality) of the kernel density estimator are examined in the following theorems, whose proofs can be found in the Appendix.

Theorem 1. Under assumptions (A1)-(A4), if the function f_θ(x) is continuous with respect to x for all θ ∈ Θ, then for any positive sequence (b_n) such that b_n → 0 and n b_n^d → +∞ as n → +∞, f̂_n(x) → f_θ(x) almost surely.

Theorem 2. Under assumptions (A1)-(A6), if (b_n) is such that n b_n^{d+4} → 0 as n → +∞, then the limiting distribution of √(n b_n^d) (f̂_n(x) − f_θ(x)) is N(0, τ²(x)), where
$$\tau^2(x) = f_\theta(x) \int_{\mathbb{R}^d} K^2(u)\,du.$$
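To make the estimator concrete, here is a minimal R sketch (not part of the original paper) of the kernel density estimator f̂_n with a product Gaussian kernel in dimension d = 2; the function name kde2d_point and the bandwidth argument are illustrative choices.

```r
# Minimal sketch (illustrative, not the authors' code): evaluate
# f_hat_n(x) = (1/(n*bn^d)) * sum_k K((x - X_k)/bn) at one point x in R^2,
# using the product Gaussian kernel K(u) = dnorm(u1) * dnorm(u2).
kde2d_point <- function(x, X, bn) {
  # x  : numeric vector of length 2, the evaluation point
  # X  : n x 2 matrix of observations X_1, ..., X_n
  # bn : bandwidth b_n
  n  <- nrow(X)
  u1 <- (x[1] - X[, 1]) / bn
  u2 <- (x[2] - X[, 2]) / bn
  sum(dnorm(u1) * dnorm(u2)) / (n * bn^2)   # bn^d with d = 2
}
```

Evaluating kde2d_point over a grid of points gives an estimate of the whole invariant density.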

3. Estimation of the Parameter

The minimum Hellinger distance estimator of θ is defined by
$$\hat\theta_n = \operatorname{Arg}\min_{\theta \in \Theta} H_2(\hat f_n, f_\theta)$$
where
$$H_2(\hat f_n, f_\theta) = \left\{\int_{\mathbb{R}^d} \left(\hat f_n^{1/2}(x) - f_\theta^{1/2}(x)\right)^2 dx\right\}^{1/2}.$$
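A possible numerical implementation of this criterion, under the assumption that the integral is approximated by a Riemann sum on a finite rectangular grid, is sketched below in R; kde2d_point is the kernel estimator sketched in Section 2 and f_theta is a parametric density supplied by the user.

```r
# Minimal sketch (illustrative): squared Hellinger distance H_2^2(f_hat_n, f_theta)
# approximated on a grid; minimizing it over theta gives the MHD estimator.
hellinger2 <- function(theta, X, bn, grid1, grid2, f_theta) {
  d1 <- diff(grid1)[1]; d2 <- diff(grid2)[1]        # grid mesh sizes
  acc <- 0
  for (x1 in grid1) for (x2 in grid2) {
    fn  <- kde2d_point(c(x1, x2), X, bn)            # nonparametric estimate
    ft  <- f_theta(c(x1, x2), theta)                # parametric density
    acc <- acc + (sqrt(fn) - sqrt(ft))^2 * d1 * d2
  }
  acc                                               # objective to minimize over theta
}
```

In practice the grid should cover the effective support of the data; finer grids trade computing time for accuracy.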

Let 𝒢 denote the set of square-integrable functions with respect to the Lebesgue measure on ℝ^d. Define the functional T: 𝒢 → Θ as follows: for g ∈ 𝒢, let
$$A(g) = \left\{\theta \in \Theta : H_2(f_\theta, g) = \min_{\gamma \in \Theta} H_2(f_\gamma, g)\right\}$$
where H_2 is the Hellinger distance. If A(g) is reduced to a unique element, then T(g) is defined as that element; otherwise, we choose an arbitrary but fixed element of A(g) and call it T(g).

Theorem 3. (Almost sure consistency) Assume that assumptions (A1)-(A4) and (A7) hold. If, for all x ∈ ℝ^d, f_θ(x) is continuous at θ_0, then for any positive sequence (b_n) such that b_n → 0 and n b_n^d → +∞, θ̂_n converges almost surely to θ_0 as n → +∞.

Proof. By Theorem 1, f̂_n(x) → f_{θ_0}(x) almost surely.


Using the inequality (a^{1/2} − b^{1/2})² ≤ |a − b| for a, b ≥ 0, we get
$$H_2^2(\hat f_n, f_\theta) = \int_{\mathbb{R}^d} \left(\hat f_n^{1/2}(x) - f_\theta^{1/2}(x)\right)^2 dx \le \int_{\mathbb{R}^d} \left|\hat f_n(x) - f_\theta(x)\right| dx.$$
Since
$$\int_{\mathbb{R}^d} \hat f_n(x)\,dx = \int_{\mathbb{R}^d} f_\theta(x)\,dx = 1,$$
it follows that H_2²(f̂_n, f_{θ_0}) → 0 almost surely [13] [14].

By Theorem 1 of [7], T(f_{θ_0}) = θ_0 uniquely on Θ; then the functional T is continuous at f_{θ_0} in the Hellinger topology. Therefore θ̂_n = T(f̂_n) → T(f_{θ_0}) = θ_0 almost surely. This achieves the proof of the theorem.

Denote
$$g_\theta = f_\theta^{1/2}, \qquad \dot g_\theta = \frac{\partial g_\theta}{\partial \theta}, \qquad \ddot g_\theta = \frac{\partial^2 g_\theta}{\partial \theta\,\partial \theta^t}$$
when these quantities exist. Furthermore, let
$$V_\theta(x) = \left\{\int_{\mathbb{R}^d} \dot g_\theta(x)\, \dot g_\theta^t(x)\, dx\right\}^{-1} \dot g_\theta(x) \qquad \text{and} \qquad h_\theta(x) = \frac{\dot g_\theta(x)}{2 f_\theta^{1/2}(x)}.$$

To prove the asymptotic normality of the estimator of the parameter, we begin with two lemmas.

Lemma 1. Let E_n be a subset of ℝ^d and denote by E_n^c the complement of E_n. Assume that

(1) assumptions (A1)-(A5) are satisfied,

(2) h_{θ_0}(·) is twice continuously differentiable with respect to x,
$$\sqrt{n}\, b_n^2 \int_{E_n} f_{\theta_0}(y)\, dy \to 0 \qquad \text{and} \qquad \sqrt{n}\, b_n^2 \int_{E_n} \frac{\partial^2 h_{\theta_0}(y)}{\partial y_i^2}\, f_{\theta_0}(y)\, dy \to 0 \quad \text{for } i = 1, \dots, d,$$

(3)
$$\sqrt{n} \int_{E_n^c} \left(\int_{\mathbb{R}^d} h_{\theta_0}(y + u b_n)\, K(u)\, du\right) f_{\theta_0}(y)\, dy \to 0 \qquad \text{and} \qquad \sqrt{n} \int_{E_n^c} \dot g_{\theta_0}(y)\, f_{\theta_0}^{1/2}(y)\, dy \to 0,$$

(4) E[h_{θ_0}^{2+δ}(X_1)] < ∞ for some δ > 0,

then for any positive sequence (b_n) such that b_n → 0, the limiting distribution of ∫_{ℝ^d} √n h_{θ_0}(x) f̂_n(x) dx is N(0, Γ), where
$$\Gamma = \frac{1}{4} \int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx.$$

The proof can be found in the Appendix.

Remark 1. The two-dimensional stochastic process (see Section 4) with invariant density
$$f_\theta(x, y) = \frac{\sqrt{\beta\sigma}}{\pi} \exp\left(-\beta x^2 - \sigma y^2\right), \quad \beta > 0,\ \sigma > 0,$$
where θ = (β, σ), satisfies the conditions of Lemma 1 with, for example, E_n = [w_n; +∞[ × [w_n; +∞[, a subset of ℝ², where w_n = √n.

Lemma 2. Let G_n be a compact set of ℝ^d and denote by G_n^c the complement of G_n. Suppose that assumptions (A1)-(A6) are satisfied and:

(1)
$$\frac{1}{n^{1/2} b_n^d} \int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x) \left(\int_{\mathbb{R}^d} K^2(t)\, f_{\theta_0}(x - t b_n)\, dt\right) dx \to 0,$$

(2) p, q and r are such that 1/p + 1/q + 1/r = 1 and
$$\frac{1}{n^{3/2} b_n^{d + d/p}} \int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x) \left[\int_{\mathbb{R}^d} K^2(t)\, f_{\theta_0}(x - t b_n)\, dt\right]^{1/q + 1/r} dx \to 0,$$

(3)
$$n^{1/2} b_n^4 \int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x) \left[1 + \left(\frac{\partial^2 f_{\theta_0}(x)}{\partial x_i^2}\right)^2\right] dx \to 0 \quad \text{for } i = 1, \dots, d,$$

(4)
$$\sqrt{n} \int_{G_n^c} \dot g_{\theta_0}(x)\, f_{\theta_0}^{1/2}(x)\, dx \to 0,$$

(5)
$$\sqrt{n} \int_{G_n^c} h_{\theta_0}(x) \left(\int_{\mathbb{R}^d} K(u)\, f_{\theta_0}(x + u b_n)\, du\right) dx \to 0,$$

then
$$R_n = \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x) \left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx \to 0 \quad \text{in probability as } n \to +\infty.$$

The proof can be found in the Appendix.

Remark 2. Let G_n = [−v_n; v_n] × [−v_n; v_n] be a compact set of ℝ², where {v_n, n ≥ 1} is a sequence of positive numbers diverging to infinity, and let b_n = (log(n))^{1/2}/n^{δ}, with δ > 0 and p, q, r (1/p + 1/q + 1/r = 1) chosen so that conditions (1)-(5) hold. Then the two-dimensional stochastic process with invariant density
$$f_\theta(x, y) = \frac{\sqrt{\beta\sigma}}{\pi} \exp\left(-\beta x^2 - \sigma y^2\right), \quad \beta > 0,\ \sigma > 0,$$
where θ = (β, σ), satisfies the conditions of Lemma 2.

Theorem 4. (Asymptotic normality) Under assumption (A7) and the conditions of Lemma 1 and Lemma 2, if

(1) for all x ∈ ℝ^d, g_θ is twice continuously differentiable at θ_0,

(2) the components of ġ_{θ_0} and g̈_{θ_0} belong to L² and the norms of these components are continuous functions at θ_0,

(3) θ_0 is in the interior of Θ and ∫ ġ_{θ_0}(x) ġ_{θ_0}^t(x) dx is a non-singular matrix,

then the limiting distribution of √n (θ̂_n − θ_0) is N(0, λ²), where
$$\lambda^2 = \frac{1}{4}\left\{\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx\right\}^{-1}.$$

Proof. From Theorem 2 of [7], we have
$$\begin{aligned}
\sqrt{n}\,(\hat\theta_n - \theta_0) &= \sqrt{n}\left\{\int_{\mathbb{R}^d} V_{\theta_0}(x)\left[\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right] dx\right\} + \sqrt{n}\left\{A_n \int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\left[\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right] dx\right\} \\
&= \left\{\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx\right\}^{-1} \int_{\mathbb{R}^d} \sqrt{n}\, \dot g_{\theta_0}(x)\left[\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right] dx + A_n \int_{\mathbb{R}^d} \sqrt{n}\, \dot g_{\theta_0}(x)\left[\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right] dx,
\end{aligned}$$
where A_n is an (m × m) matrix which tends to 0 as n → +∞. We have
$$\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x) = \frac{\hat f_n(x) - f_{\theta_0}(x)}{2 f_{\theta_0}^{1/2}(x)} - \frac{\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2}{2 f_{\theta_0}^{1/2}(x)}.$$
Denote
$$D_n = \int_{\mathbb{R}^d} \sqrt{n}\, \dot g_{\theta_0}(x)\left[\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right] dx.$$
We have
$$\begin{aligned}
D_n &= \int_{\mathbb{R}^d} \sqrt{n}\, \dot g_{\theta_0}(x)\, \frac{\hat f_n(x) - f_{\theta_0}(x)}{2 f_{\theta_0}^{1/2}(x)}\, dx - \int_{\mathbb{R}^d} \sqrt{n}\, \dot g_{\theta_0}(x)\, \frac{\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2}{2 f_{\theta_0}^{1/2}(x)}\, dx \\
&= \int_{\mathbb{R}^d} \sqrt{n}\, \frac{\dot g_{\theta_0}(x)}{2 f_{\theta_0}^{1/2}(x)}\, \hat f_n(x)\, dx - \frac{1}{2}\int_{\mathbb{R}^d} \sqrt{n}\, \dot g_{\theta_0}(x)\, f_{\theta_0}^{1/2}(x)\, dx - R_n \\
&= \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x)\, \hat f_n(x)\, dx - R_n,
\end{aligned}$$
since ∫_{ℝ^d} ġ_{θ_0}(x) f_{θ_0}^{1/2}(x) dx = ½ ∂/∂θ ∫_{ℝ^d} f_θ(x) dx |_{θ=θ_0} = 0, where
$$R_n = \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x)\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx.$$
By Lemma 2, R_n → 0 in probability as n → ∞; then, since A_n → 0, the limiting distribution of √n (θ̂_n − θ_0) reduces to that of
$$\left\{\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx\right\}^{-1} \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x)\, \hat f_n(x)\, dx.$$
But ∫_{ℝ^d} √n h_{θ_0}(x) f̂_n(x) dx → N(0, Γ) with Γ = (1/4) ∫_{ℝ^d} ġ_{θ_0}(x) ġ_{θ_0}^t(x) dx, from Lemma 1. Therefore the limiting distribution of
$$\left\{\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx\right\}^{-1} \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x)\, \hat f_n(x)\, dx$$
is N(0, λ²), where
$$\lambda^2 = \left\{\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx\right\}^{-2} \frac{1}{4}\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx = \frac{1}{4}\left\{\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\, dx\right\}^{-1}.$$

This completes the proof of the theorem.

4. Examples and Simulations

4.1. Example 1

We consider the two-dimensional Ornstein-Uhlenbeck process solution of the stochastic differential equation
$$dZ_t = A Z_t\, dt + dW_t, \quad Z_0 = z_0, \qquad (2)$$
where
$$A = \begin{pmatrix} -\beta & 0 \\ 0 & -\sigma \end{pmatrix}, \quad \beta > 0,\ \sigma > 0.$$
Let Z = (X, Y) and z = (x, y); we have
$$a(z, \theta) = \begin{pmatrix} -\beta & 0 \\ 0 & -\sigma \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \qquad b(z, \theta) = b = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{and} \qquad \theta = (\beta, \sigma).$$
a(z,θ) and b(z,θ) satisfy assumptions (A1)-(A3). Therefore, Z_t is exponentially strongly mixing and the invariant distribution μ_θ admits a density f_θ with respect to the Lebesgue measure. Furthermore [15], μ_θ = N(0, Γ), the Gaussian distribution on ℝ² with Γ the unique symmetric solution of the equation
$$C + A\Gamma + \Gamma A^t = 0 \quad \text{where } C = b b^t. \qquad (3)$$
The solution of Equation (3) is
$$\Gamma = \begin{pmatrix} \dfrac{1}{2\beta} & 0 \\ 0 & \dfrac{1}{2\sigma} \end{pmatrix}.$$
Therefore [16], the density of the invariant distribution is
$$f_\theta(x, y) = \frac{\sqrt{\beta\sigma}}{\pi} \exp\left(-\beta x^2 - \sigma y^2\right).$$

The minimum Hellinger distance estimator of θ is defined by
$$\hat\theta_n = \operatorname{Arg}\min_{\theta \in \Theta} H_2(\hat f_n, f_\theta)$$
where
$$H_2(\hat f_n, f_\theta) = \left\{\int_{\mathbb{R}^2} \left(\hat f_n^{1/2}(x, y) - f_\theta^{1/2}(x, y)\right)^2 dx\,dy\right\}^{1/2}$$
with
$$\hat f_n(x, y) = \frac{1}{n b_n^2} \sum_{k=1}^{n} K_1\!\left(\frac{x - X_k}{b_n}\right) K_1\!\left(\frac{y - Y_k}{b_n}\right) \qquad \text{and} \qquad f_\theta(x, y) = \frac{\sqrt{\beta\sigma}}{\pi}\exp\left(-\beta x^2 - \sigma y^2\right),$$

where K_1 is a kernel function which satisfies conditions (A4) and (A5) and is such that K_1(x) K_1(y) = K(x, y). Let W = (W^{(1)}, W^{(2)}); we can write Equation (2) as follows:
$$\begin{pmatrix} dX_t \\ dY_t \end{pmatrix} = \begin{pmatrix} -\beta & 0 \\ 0 & -\sigma \end{pmatrix}\begin{pmatrix} X_t \\ Y_t \end{pmatrix} dt + \begin{pmatrix} dW_t^{(1)} \\ dW_t^{(2)} \end{pmatrix},$$
which gives the following system:
$$dX_t = -\beta X_t\, dt + dW_t^{(1)}, \qquad dY_t = -\sigma Y_t\, dt + dW_t^{(2)}.$$

Thus, (X_t)_{t≥0} and (Y_t)_{t≥0} are two independent univariate Ornstein-Uhlenbeck processes with parameters β and σ respectively.

We now give simulations for different parameter values using the R language. For each process, we generate sample paths using the package "sde" [17] and, to compute a value of the estimator, we use the function "nlm" [18] of the R language. The kernel function K_1 is the density of the standard normal distribution. We use the bandwidth b_n = log(n)/n^{0.24}, in accordance with the conditions on the bandwidth given in the paper. Simulations are based on 1000 observations of the Ornstein-Uhlenbeck process with 200 replications. Simulation results are given in Table 1; a sketch of the procedure is given below.
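The following R sketch reproduces the spirit of this experiment under stated assumptions (it is not the authors' script): the two coordinates are sampled exactly from their univariate Ornstein-Uhlenbeck transition at step Δ = 0.1, the kernel estimator and Hellinger objective are the kde2d_point and hellinger2 sketches given earlier, the integration grid is deliberately coarse, and nlm is run on log-parameters to keep β and σ positive.

```r
# Minimal end-to-end sketch of Example 1 (illustrative assumptions throughout).
set.seed(1)
n <- 1000; Delta <- 0.1
beta0 <- 0.3; sigma0 <- 0.7

# Exact OU transition for dX = -a X dt + dW:
# X_{k+1} = e^{-a*Delta} X_k + sqrt((1 - e^{-2a*Delta})/(2a)) * Z,  Z ~ N(0,1),
# started from the invariant law N(0, 1/(2a)).
ou_path <- function(a, n, Delta) {
  x <- numeric(n)
  x[1] <- rnorm(1, 0, sqrt(1 / (2 * a)))
  for (k in 2:n)
    x[k] <- exp(-a * Delta) * x[k - 1] +
            sqrt((1 - exp(-2 * a * Delta)) / (2 * a)) * rnorm(1)
  x
}
X <- cbind(ou_path(beta0, n, Delta), ou_path(sigma0, n, Delta))

bn <- log(n) / n^0.24                      # bandwidth used in the paper

# Invariant density f_theta(x, y) = sqrt(beta*sigma)/pi * exp(-beta x^2 - sigma y^2)
f_theta <- function(z, th) sqrt(th[1] * th[2]) / pi * exp(-th[1] * z[1]^2 - th[2] * z[2]^2)

grid1 <- seq(-4, 4, length.out = 60); grid2 <- grid1
obj <- function(log_th) hellinger2(exp(log_th), X, bn, grid1, grid2, f_theta)
fit <- nlm(obj, p = log(c(0.5, 0.5)))      # minimize over log-parameters
exp(fit$estimate)                          # MHD estimate of (beta, sigma)
```

Averaging such estimates over independent replications yields Monte Carlo means and standard errors like those reported in Table 1.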

Table 1. Means and standard errors of the minimum Hellinger distance estimator θ̂ = (β̂, σ̂).

θ0 = (β0, σ0)    Means                       Standard errors
(0.3, 0.7)       (0.2985977, 0.6998527)      (0.01076013, 0.01032311)
(0.5, 2)         (0.4954066, 1.998997)       (0.0341282, 0.008429909)
(1, 2.4)         (0.9987882, 2.398991)       (0.009604874, 0.01262858)
(1, 3)           (0.998918, 2.999193)        (0.008726621, 0.01034987)
(0.223, 0.6)     (0.2223449, 0.6006928)      (0.01048311, 0.01224315)


In Table 1, θ_0 denotes the true value of the parameter and θ̂ denotes an estimate of θ_0 given by the minimum Hellinger distance estimator. The simulation results illustrate the good properties of the estimator: the means of the estimator are quite close to the true values of the parameter in all cases and the standard errors are low.

4.2. Example 2

We consider the homogeneous Gaussian diffusion process [19] solution of the stochastic differential equation
$$dX_t = (A + B X_t)\, dt + \sigma\, dW_t, \quad X_0 = x_0, \qquad (4)$$
where σ > 0 is known, W is a two-dimensional Brownian motion, B is a 2 × 2 matrix whose eigenvalues have strictly negative real parts and A is a 2 × 1 matrix. Under this condition on the matrix B, X has an invariant probability μ = N(m, Γ), where m = −B^{-1}A and Γ is the unique symmetric solution of the equation
$$C + B\Gamma + \Gamma B^t = 0 \quad \text{where } C = D D^t \text{ and } D = \sigma I \text{ with } I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \qquad (5)$$
Let
$$A = \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} \qquad \text{and} \qquad B = \begin{pmatrix} \beta_{11} & \beta_{12} \\ \beta_{12} & \beta_{22} \end{pmatrix} \quad \text{with } \beta_{11} < 0 \text{ and } \beta_{22} < 0.$$
As in [19], we suppose that σ = √2. In the following, we also suppose that β_{11}β_{22} − β_{12}² ≠ 0. Then we have

$$B^{-1} = \frac{1}{\beta_{11}\beta_{22} - \beta_{12}^2}\begin{pmatrix} \beta_{22} & -\beta_{12} \\ -\beta_{12} & \beta_{11} \end{pmatrix} \qquad \text{and} \qquad m = \frac{1}{\beta_{11}\beta_{22} - \beta_{12}^2}\begin{pmatrix} \alpha_2\beta_{12} - \alpha_1\beta_{22} \\ \alpha_1\beta_{12} - \alpha_2\beta_{11} \end{pmatrix}.$$
Let
$$\Gamma = \begin{pmatrix} a & b \\ b & d \end{pmatrix}; \qquad \text{we have } C = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix},$$
and
$$C + B\Gamma + \Gamma B^t = 0 \iff \begin{cases} a\beta_{11} + b\beta_{12} = -1 \\ a\beta_{12} + b(\beta_{11} + \beta_{22}) + d\beta_{12} = 0 \\ b\beta_{12} + d\beta_{22} = -1 \end{cases} \iff \begin{pmatrix} \beta_{11} & \beta_{12} & 0 \\ \beta_{12} & \beta_{11}+\beta_{22} & \beta_{12} \\ 0 & \beta_{12} & \beta_{22} \end{pmatrix}\begin{pmatrix} a \\ b \\ d \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ -1 \end{pmatrix}.$$
Let
$$G = \begin{pmatrix} \beta_{11} & \beta_{12} & 0 \\ \beta_{12} & \beta_{11}+\beta_{22} & \beta_{12} \\ 0 & \beta_{12} & \beta_{22} \end{pmatrix} \qquad \text{and} \qquad H = \begin{pmatrix} -1 \\ 0 \\ -1 \end{pmatrix};$$
we have det(G) = (β_{11} + β_{22})(β_{11}β_{22} − β_{12}²) ≠ 0, so G is invertible and
$$\begin{pmatrix} a \\ b \\ d \end{pmatrix} = G^{-1} H = \frac{1}{\beta_{11}\beta_{22} - \beta_{12}^2}\begin{pmatrix} -\beta_{22} \\ \beta_{12} \\ -\beta_{11} \end{pmatrix} \qquad \text{and} \qquad \Gamma = \frac{1}{\beta_{11}\beta_{22} - \beta_{12}^2}\begin{pmatrix} -\beta_{22} & \beta_{12} \\ \beta_{12} & -\beta_{11} \end{pmatrix}.$$
Γ is invertible and we have
$$\Gamma^{-1} = \begin{pmatrix} -\beta_{11} & -\beta_{12} \\ -\beta_{12} & -\beta_{22} \end{pmatrix}.$$

Table 2. Means and standard errors of the estimators.

True values of θ    θ̂ (MHD): Means    θ̂ (MHD): Standard errors    θ̂ (Estimating function): Means    θ̂ (Estimating function): Standard errors
α1 = 4              3.996942           0.0005203                    4.0349                              0.2904
α2 = 1              1.00776            0.001311968                  1.0035                              0.2891
β11 = −2            −2.007696          0.001315799                  −2.0155                             0.1248
β22 = −3            −2.982749          0.002923666                  −3.0247                             0.1978
β12 = 1             1.009081           0.001513984                  1.0078                              0.1177

Hence, the invariant density of μ is
$$\begin{aligned}
f(x) &= \frac{1}{2\pi\left(\det(\Gamma)\right)^{1/2}}\exp\left(-\frac{1}{2}(x - m)^t\,\Gamma^{-1}(x - m)\right) \\
&= \frac{\sqrt{\beta_{11}\beta_{22} - \beta_{12}^2}}{2\pi}\exp\left\{\frac{1}{2}\left[\beta_{11}\left(x_1 - \frac{\alpha_2\beta_{12} - \alpha_1\beta_{22}}{\beta_{11}\beta_{22} - \beta_{12}^2}\right)^2 + \beta_{22}\left(x_2 - \frac{\alpha_1\beta_{12} - \alpha_2\beta_{11}}{\beta_{11}\beta_{22} - \beta_{12}^2}\right)^2\right]\right\} \\
&\quad \times \exp\left\{\beta_{12}\left(x_1 - \frac{\alpha_2\beta_{12} - \alpha_1\beta_{22}}{\beta_{11}\beta_{22} - \beta_{12}^2}\right)\left(x_2 - \frac{\alpha_1\beta_{12} - \alpha_2\beta_{11}}{\beta_{11}\beta_{22} - \beta_{12}^2}\right)\right\}.
\end{aligned}$$

For simulation, we write the stochastic differential Equation (4) in matrix form as follows:
$$\begin{pmatrix} dX_t^{(1)} \\ dX_t^{(2)} \end{pmatrix} = \left[\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} + \begin{pmatrix} \beta_{11} & \beta_{12} \\ \beta_{12} & \beta_{22} \end{pmatrix}\begin{pmatrix} X_t^{(1)} \\ X_t^{(2)} \end{pmatrix}\right] dt + \begin{pmatrix} \sigma & 0 \\ 0 & \sigma \end{pmatrix}\begin{pmatrix} dW_t^{(1)} \\ dW_t^{(2)} \end{pmatrix} = \begin{pmatrix} \alpha_1 + \beta_{11} X_t^{(1)} + \beta_{12} X_t^{(2)} \\ \alpha_2 + \beta_{12} X_t^{(1)} + \beta_{22} X_t^{(2)} \end{pmatrix} dt + \begin{pmatrix} \sigma & 0 \\ 0 & \sigma \end{pmatrix}\begin{pmatrix} dW_t^{(1)} \\ dW_t^{(2)} \end{pmatrix}.$$
As in [19], the true values of the parameter θ = (α_1, α_2, β_{11}, β_{22}, β_{12}) are θ_0 = (4, 1, −2, −3, 1) and σ = √2. Then we have
$$\begin{pmatrix} dX_t^{(1)} \\ dX_t^{(2)} \end{pmatrix} = \begin{pmatrix} 4 - 2X_t^{(1)} + X_t^{(2)} \\ 1 + X_t^{(1)} - 3X_t^{(2)} \end{pmatrix} dt + \begin{pmatrix} \sqrt{2} & 0 \\ 0 & \sqrt{2} \end{pmatrix}\begin{pmatrix} dW_t^{(1)} \\ dW_t^{(2)} \end{pmatrix}.$$
Now, we can simulate a sample path of the homogeneous Gaussian diffusion using the "yuima" package of the R language [20]. We use the function "nlm" to compute a value of the estimator. We generate 500 sample paths of the process, each of size 500. The kernel function and the bandwidth are those of the previous example. We compare the estimator obtained by the minimum Hellinger distance method (MHD) of this paper with the estimator obtained in [19] by an estimating function. Table 2 summarizes the means and standard errors of the two estimators.

Table 2 shows that both estimators behave well: for the two methods, the means of the estimators are close to the true values of the parameter, but the standard errors of the MHD estimator are lower than those of the estimating function estimator.

References

[1] Dacunha-Castelle, D. and Florens-Zmirou, D. (1986) Estimation of the Coefficients of a Diffusion from Discrete Observations. Stochastics, 19, 263-284. http://dx.doi.org/10.1080/17442508608833428
[2] Pedersen, A.R. (1995) A New Approach to Maximum Likelihood Estimation for Stochastic Differential Equations Based on Discrete Observations. Scandinavian Journal of Statistics, 22, 55-71.
[3] Yoshida, N. (1992) Estimation for Diffusion Processes from Discrete Observation. Journal of Multivariate Analysis, 41, 220-242. http://dx.doi.org/10.1016/0047-259X(92)90068-Q
[4] Aït-Sahalia, Y. (2002) Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed-Form Approximation Approach. Econometrica, 70, 223-262. http://dx.doi.org/10.1111/1468-0262.00274
[5] Florens-Zmirou, D. (1989) Approximate Discrete-Time Schemes for Statistics of Diffusion Processes. Statistics, 20, 547-557. http://dx.doi.org/10.1080/02331888908802205
[6] Bibby, B.M. and Sørensen, M. (1995) Martingale Estimation Functions for Discretely Observed Diffusion Processes. Bernoulli, 1, 17-39. http://dx.doi.org/10.2307/3318679
[7] Beran, R. (1977) Minimum Hellinger Distance Estimates for Parametric Models. Annals of Statistics, 5, 445-463. http://dx.doi.org/10.1214/aos/1176343842
[8] Hili, O. (1995) On the Estimation of Nonlinear Time Series Models. Stochastics: An International Journal of Probability and Stochastic Processes, 52, 207-226. http://dx.doi.org/10.1080/17442509508833972
[9] N'drin, J.A. and Hili, O. (2013) Parameter Estimation of One-Dimensional Diffusion Process by Minimum Hellinger Distance Method. Random Operators and Stochastic Equations, 21, 403-424. http://dx.doi.org/10.1515/rose-2013-0019
[10] Bianchi, A. (2007) Nonparametric Trend Coefficient Estimation for Multidimensional Diffusions. Comptes Rendus de l'Académie des Sciences, 345, 101-105.
[11] Pardoux, E. and Veretennikov, Y.A. (2001) On the Poisson Equation and Diffusion Approximation. I. The Annals of Probability, 29, 1061-1085. http://dx.doi.org/10.1214/aop/1015345596
[12] Veretennikov, Y.A. (1997) On Polynomial Mixing Bounds for Stochastic Differential Equations. Stochastic Processes and their Applications, 70, 115-127. http://dx.doi.org/10.1016/S0304-4149(97)00056-2
[13] Devroye, L. and Györfi, L. (1985) Nonparametric Density Estimation: The L1 View. Wiley, New York.
[14] Glick, N. (1974) Consistency Conditions for Probability Estimators and Integrals of Density Estimators. Utilitas Mathematica, 6, 61-74.
[15] Jacobsen, M. (2001) Examples of Multivariate Diffusions: Time-Reversibility, a Cox-Ingersoll-Ross Type Process. Department of Statistics and Operations Research, University of Copenhagen, Copenhagen.
[16] Caumel, Y. (2011) Probabilités et processus stochastiques. Springer-Verlag, Paris. http://dx.doi.org/10.1007/978-2-8178-0163-6
[17] Iacus, S.M. (2008) Simulation and Inference for Stochastic Differential Equations. Springer Series in Statistics, Springer, New York. http://dx.doi.org/10.1007/978-0-387-75839-8
[18] Lafaye de Micheaux, P., Drouilhet, R. and Liquet, B. (2011) Le logiciel R: Maitriser le langage, Effectuer des analyses statistiques. Springer-Verlag, Paris. http://dx.doi.org/10.1007/978-2-8178-0115-5
[19] Sørensen, H. (2001) Discretely Observed Diffusions: Approximation of the Continuous-Time Score Function. Scandinavian Journal of Statistics, 28, 113-121. http://dx.doi.org/10.1111/1467-9469.00227
[20] Iacus, S.M. (2011) Option Pricing and Estimation of Financial Models with R. John Wiley and Sons, Ltd., Chichester. http://dx.doi.org/10.1002/9781119990079
[21] Roussas, G.G. (1969) Nonparametric Estimation in Markov Processes. Annals of the Institute of Statistical Mathematics, 21, 73-87. http://dx.doi.org/10.1007/BF02532233
[22] Bosq, D. (1998) Nonparametric Statistics for Stochastic Processes: Estimation and Prediction. 2nd Edition, Springer-Verlag, New York. http://dx.doi.org/10.1007/978-1-4612-1718-3
[23] Modha, D.S. and Masry, E. (1996) Minimum Complexity Regression Estimation with Weakly Dependent Observations. IEEE Transactions on Information Theory, 42, 2133-2145. http://dx.doi.org/10.1109/18.556602
[24] Foata, D. and Fuchs, A. (1998) Calcul des probabilités: Cours, exercices et problèmes corrigés. 2e édition, Dunod, Paris.


Appendix

A1. Proof of Theorem 1

Proof. Write f̄_n(x) = E[f̂_n(x)]. Then
$$\hat f_n(x) - f_\theta(x) = \left(\hat f_n(x) - \bar f_n(x)\right) + \left(\bar f_n(x) - f_\theta(x)\right), \qquad \left|\hat f_n(x) - f_\theta(x)\right| \le \left|\hat f_n(x) - \bar f_n(x)\right| + \left|\bar f_n(x) - f_\theta(x)\right|.$$
We have:

Step 1:
$$\bar f_n(x) = E\left[\frac{1}{n b_n^d}\sum_{k=1}^{n} K\!\left(\frac{x - X_k}{b_n}\right)\right] = \frac{1}{b_n^d}\,E\,K\!\left(\frac{x - X_1}{b_n}\right) = \frac{1}{b_n^d}\int_{\mathbb{R}^d} K\!\left(\frac{x - u}{b_n}\right) f_\theta(u)\,du = \frac{1}{b_n^d}\int_{\mathbb{R}^d} K\!\left(\frac{t}{b_n}\right) f_\theta(x - t)\,dt \to f_\theta(x)$$
by Theorem 2.1 of [21]. Hence
$$\left|\bar f_n(x) - f_\theta(x)\right| \to 0. \qquad (6)$$

Step 2:
$$\hat f_n(x) - \bar f_n(x) = \frac{1}{n b_n^d}\sum_{k=1}^{n} Y_k, \qquad \text{where} \qquad Y_k = K\!\left(\frac{x - X_k}{b_n}\right) - E\,K\!\left(\frac{x - X_k}{b_n}\right).$$
We have E(Y_k) = 0 and |Y_k| ≤ 2N_1. Then, by Theorem 2.1 of [9], we have for all ε > 0
$$P\left(\left|\frac{1}{n b_n^d}\sum_{k=1}^{n} Y_k\right| > \varepsilon\right) = P\left(\left|\frac{1}{n}\sum_{k=1}^{n} Y_k\right| > \varepsilon\, b_n^d\right) \le 2C \exp\left(-\frac{\varepsilon^2 n b_n^{2d}}{2\left(E\,Y_1^2 + \dfrac{2 N_1 b_n^d \varepsilon}{3}\right)}\right).$$
We have
$$E\,Y_1^2 = E\left[K\!\left(\frac{x - X_1}{b_n}\right) - E\,K\!\left(\frac{x - X_1}{b_n}\right)\right]^2 = b_n^d\left[\frac{1}{b_n^d}\,E\,K^2\!\left(\frac{x - X_1}{b_n}\right) - b_n^d\left(\frac{1}{b_n^d}\,E\,K\!\left(\frac{x - X_1}{b_n}\right)\right)^2\right] = b_n^d A_n,$$
where A_n → f_θ(x) ∫_{ℝ^d} K²(u) du. Then
$$P\left(\left|\frac{1}{n b_n^d}\sum_{k=1}^{n} Y_k\right| > \varepsilon\right) \le 2C \exp\left(-\frac{\varepsilon^2 n b_n^{2d}}{2\left(b_n^d A_n + \dfrac{2 N_1 b_n^d \varepsilon}{3}\right)}\right) \le 2C \exp\left(-\frac{n b_n^d\,\varepsilon^2}{2\left(A_n + \dfrac{2 N_1 \varepsilon}{3}\right)}\right).$$
Therefore
$$\left|\hat f_n(x) - \bar f_n(x)\right| \to 0 \quad \text{almost surely}, \qquad (7)$$
by the Borel-Cantelli lemma. (6) and (7) imply that f̂_n(x) → f_θ(x) almost surely. This achieves the proof of the theorem.

A2. Proof of Theorem 2

Proof.
$$\sqrt{n b_n^d}\left(\hat f_n(x) - f_\theta(x)\right) = \sqrt{n b_n^d}\left(\hat f_n(x) - \bar f_n(x)\right) + \sqrt{n b_n^d}\left(\bar f_n(x) - f_\theta(x)\right).$$

(1) By making the change of variable t = (x − u)/b_n and using assumptions (A4) and (A5), we get
$$\begin{aligned}
\sqrt{n b_n^d}\left(\bar f_n(x) - f_\theta(x)\right) &= \sqrt{n b_n^d}\left\{\int_{\mathbb{R}^d}\frac{1}{b_n^d}\,K\!\left(\frac{x - u}{b_n}\right) f_\theta(u)\,du - f_\theta(x)\right\} = \sqrt{n b_n^d}\left\{\int_{\mathbb{R}^d} K(t)\left[f_\theta(x - t b_n) - f_\theta(x)\right] dt\right\} \\
&= \sqrt{n b_n^d}\left\{\int_{\mathbb{R}^d} K(t)\left[\frac{1}{2!}\sum_{i,j} t_i t_j\,(-b_n)^2\,\frac{\partial^2 f_\theta(x)}{\partial x_i \partial x_j} + o(b_n^2)\right] dt\right\} \\
&= \sqrt{n b_n^{d+4}}\left[\frac{1}{2}\sum_{i=1}^{d}\frac{\partial^2 f_\theta(x)}{\partial x_i^2}\int_{\mathbb{R}^d} t_i^2\,K(t)\,dt + o(1)\right] \to 0 \quad \text{as } n \to +\infty,
\end{aligned}$$
since n b_n^{d+4} → 0 (the first-order term of the Taylor expansion vanishes because ∫ u_i K(u) du = 0).

(2)
$$\sqrt{n b_n^d}\left(\hat f_n(x) - \bar f_n(x)\right) = (n b_n^d)^{-1/2}\sum_{k=1}^{n} Y_k, \qquad \text{where} \qquad Y_k = K\!\left(\frac{x - X_k}{b_n}\right) - E\,K\!\left(\frac{x - X_k}{b_n}\right).$$
We have E(Y_k) = 0 and |Y_k| ≤ 2N_1.

Let p = p(n), q = q(n) and r = r(n) be positive integers which tend to infinity as n → ∞ and such that r(p + q) ≤ n < r(p + q + 1). Define U_m and V_m by
$$U_m = \sum_{k=(m-1)(p+q)+1}^{(m-1)(p+q)+p} Y_k, \qquad V_m = \sum_{k=(m-1)(p+q)+p+1}^{m(p+q)} Y_k, \qquad m = 1, \dots, r,$$
and
$$V_{r+1} = \sum_{k=r(p+q)+1}^{n} Y_k.$$
We have
$$\sum_{k=1}^{n} Y_k = \sum_{m=1}^{r} U_m + \sum_{m=1}^{r+1} V_m.$$

Step 1: we prove that (n b_n^d)^{-1/2} Σ_{m=1}^{r+1} V_m → 0 in probability.

By Minkowski's inequality, we have
$$\left\{E\left[(n b_n^d)^{-1/2}\sum_{m=1}^{r+1} V_m\right]^2\right\}^{1/2} \le (n b_n^d)^{-1/2}\left[\sum_{m=1}^{r}\left(E\,V_m^2\right)^{1/2} + \left(E\,V_{r+1}^2\right)^{1/2}\right] \le (n b_n^d)^{-1/2}\left[r\left(E\,V_1^2\right)^{1/2} + \left(E\,V_{r+1}^2\right)^{1/2}\right].$$

(1) Using Billingsley's inequality [22],
$$E\,V_1^2 = E\left(\sum_{k=p+1}^{p+q} Y_k\right)^2 = q\,E\,Y_1^2 + 2\sum_{i<j} E(Y_i Y_j) \le q\,E\,Y_1^2 + 2q\sum_{j=p+1}^{p+q-1}\left|E(Y_{p+1} Y_{j+1})\right| \le q\left[E\,Y_1^2 + 32 N_1^2\sum_{j=p+1}^{p+q-1}\alpha(j - p)\right] \le q\left[E\,Y_1^2 + 32 N_1^2\sum_{j=1}^{q-1}\alpha(j)\right] \le q\,C,$$
with C = E Y_1² + 32 N_1² Σ_{j=1}^{∞} α(j).

(2)
$$E\,V_{r+1}^2 = E\left(\sum_{k=r(p+q)+1}^{n} Y_k\right)^2 = \left(n - r(p+q)\right) E\,Y_1^2 + 2\sum_{i<j} E(Y_i Y_j) \le r\,C.$$
Hence,
$$\left\{E\left[(n b_n^d)^{-1/2}\sum_{m=1}^{r+1} V_m\right]^2\right\}^{1/2} \le \sqrt{C}\,(n b_n^d)^{-1/2}\left(r\sqrt{q} + \sqrt{r}\right).$$
Therefore, choosing q = q(n), r = r(n) and b_n such that
$$\frac{r\sqrt{q}}{\sqrt{n b_n^d}} \to 0 \quad \text{as } n \to \infty, \qquad (8)$$
we get
$$E\left[(n b_n^d)^{-1/2}\sum_{m=1}^{r+1} V_m\right]^2 \to 0,$$

which implies that
$$(n b_n^d)^{-1/2}\sum_{m=1}^{r+1} V_m \to 0 \quad \text{in probability}.$$

Step 2: asymptotic normality of (n b_n^d)^{-1/2} Σ_{m=1}^{r} U_m.

The U_m, m = 1, …, r, have the same distribution, so that
$$\prod_{m=1}^{r} E\,\exp(i t U_m) = \left(E\,\exp(i t U_1)\right)^r.$$
From Lemma 4.2 of [23], we have
$$\left|E\,\exp\!\left(i t \sum_{m=1}^{r} U_m\right) - \left(E\,\exp(i t U_1)\right)^r\right| = \left|E\,\exp\!\left(i t \sum_{m=1}^{r} U_m\right) - \prod_{m=1}^{r} E\,\exp(i t U_m)\right| \le 4(r - 1)\,\alpha(1 + q) \le 4 r\,\alpha(q).$$
Set φ_1(t) = E exp(i t U_1). If q = q(n) and r = r(n) are chosen such that
$$r\,\alpha(q) \to 0 \quad \text{as } n \to \infty, \qquad (9)$$
the characteristic function of (n b_n^d)^{-1/2} Σ_{m=1}^{r} U_m has the same limit as φ_1^r(t (n b_n^d)^{-1/2}), which is the characteristic function of Σ_{m=1}^{r} Z_m, where Z_m, m = 1, …, r, are independent random variables with the distribution of (n b_n^d)^{-1/2} U_1.

We have E(Z_m) = 0 and
$$E\left(\sum_{m=1}^{r} Z_m\right)^2 = r\,E\,Z_1^2 = \frac{r}{n b_n^d}\,E\,U_1^2 = \frac{r}{n b_n^d}\left[p\,E\,Y_1^2 + 2\sum_{i<j} E(Y_i Y_j)\right] = \frac{r p}{n b_n^d}\,E\,Y_1^2 + \frac{2r}{n b_n^d}\sum_{i<j} E(Y_i Y_j).$$

(1)
$$\frac{r p}{n b_n^d}\,E\,Y_1^2 = \frac{r p}{n}\left[\frac{1}{b_n^d}\,E\,K^2\!\left(\frac{x - X_1}{b_n}\right) - b_n^d\left(\frac{1}{b_n^d}\,E\,K\!\left(\frac{x - X_1}{b_n}\right)\right)^2\right] \to f_\theta(x)\int_{\mathbb{R}^d} K^2(u)\,du$$
(note that rp/n → 1).

(2) Note that α(k) ≤ exp(−λk) =: φ(k) with λ > 0. Then
$$\begin{aligned}
\frac{2r}{n b_n^d}\left|\sum_{i<j} E(Y_i Y_j)\right| &\le \frac{2r}{n b_n^d}\sum_{j=1}^{p-1} j\left|E(Y_1 Y_{j+1})\right| \le \frac{2r}{n b_n^d}\,16 N_1^2\sum_{j=1}^{p-1} j\,\alpha(j) = \frac{32 N_1^2\, r}{n b_n^d}\sum_{k=1}^{p-1}\sum_{l=k}^{p-1}\alpha(l) \\
&\le \frac{32 N_1^2\, r}{n b_n^d}\sum_{k=1}^{p-1}\sum_{l=k}^{p-1}(\varphi(l))^{1/2}(\varphi(k))^{1/2} \le \frac{32 N_1^2\, r}{n b_n^d}\sum_{k=1}^{\infty}(\varphi(k))^{1/2}\sum_{l=1}^{\infty}(\varphi(l))^{1/2} \\
&= \frac{32 N_1^2\, r}{n b_n^d}\left(\sum_{i=1}^{\infty}(\varphi(i))^{1/2}\right)^2 \to 0 \quad \text{if } \frac{r}{n b_n^d} \to 0. \qquad (10)
\end{aligned}$$
Therefore
$$E\left(\sum_{m=1}^{r} Z_m\right)^2 \to f_\theta(x)\int_{\mathbb{R}^d} K^2(u)\,du \quad \text{as } n \to \infty.$$
Since the random variables U_m (m = 1, …, r) have the same distribution, by Lyapunov's theorem [24] the limiting distribution of (n b_n^d)^{-1/2} Σ_{m=1}^{r} U_m is N(0, τ²(x)), where
$$\tau^2(x) = f_\theta(x)\int_{\mathbb{R}^d} K^2(u)\,du.$$
The conditions (8), (9) and (10) are satisfied, for example, with
$$r(n) \sim \log(n), \qquad p(n) \sim \frac{n}{\log(n)} - n^{1/4}, \qquad q(n) \sim n^{1/4} \qquad \text{and} \qquad b_n^d = \frac{\log(n)}{n^{\lambda}} \quad \text{with } 0 < \lambda < \frac{3}{4}.$$

This achieves the proof of the theorem.
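As a quick numerical illustration (not from the paper), the R snippet below evaluates the three quantities appearing in conditions (8)-(10) for the suggested choices, with an assumed mixing bound α(k) ≤ exp(−0.1 k) and λ = 0.5 taken purely for illustration; all three columns decrease towards 0 as n grows (slowly for condition (8)).

```r
# Illustrative check of conditions (8)-(10) for r ~ log n, q ~ n^(1/4),
# b_n^d = log(n)/n^lambda with 0 < lambda < 3/4 (here lambda = 0.5).
n      <- 10^(3:7)
lambda <- 0.5
r   <- log(n); q <- n^(1/4); bnd <- log(n) / n^lambda
c8  <- r * sqrt(q) / sqrt(n * bnd)   # condition (8)
c9  <- r * exp(-0.1 * q)             # condition (9), assuming alpha(k) <= exp(-0.1 k)
c10 <- r / (n * bnd)                 # condition (10)
round(cbind(n, c8, c9, c10), 8)
```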

A3. Proof of Lemma 1

Proof. The proof of the lemma is done in two steps.

Step 1: we prove that
$$Y_n = \sqrt{n}\left[\int_{\mathbb{R}^d} h_{\theta_0}(x)\,\hat f_n(x)\,dx - \frac{1}{n}\sum_{i=1}^{n} h_{\theta_0}(X_i)\right] \to 0 \quad \text{in probability}.$$
We have
$$\begin{aligned}
E\left|Y_n\right| &\le \sqrt{n}\int_{\mathbb{R}^d}\left|\int_{\mathbb{R}^d} h_{\theta_0}(x)\,\frac{1}{b_n^d}\,K\!\left(\frac{x - y}{b_n}\right) dx - h_{\theta_0}(y)\right| f_{\theta_0}(y)\,dy \\
&= \sqrt{n}\int_{\mathbb{R}^d}\left|\int_{\mathbb{R}^d} h_{\theta_0}(y + u b_n)\,K(u)\,du - h_{\theta_0}(y)\right| f_{\theta_0}(y)\,dy \\
&= \sqrt{n}\int_{E_n}\left|\int_{\mathbb{R}^d} h_{\theta_0}(y + u b_n)\,K(u)\,du - h_{\theta_0}(y)\right| f_{\theta_0}(y)\,dy + \sqrt{n}\int_{E_n^c}\left|\int_{\mathbb{R}^d} h_{\theta_0}(y + u b_n)\,K(u)\,du - h_{\theta_0}(y)\right| f_{\theta_0}(y)\,dy \\
&= I_{1n} + I_{2n}.
\end{aligned}$$
With assumptions (A4) and (A5), we have
$$\begin{aligned}
I_{1n} &= \sqrt{n}\int_{E_n}\left|\int_{\mathbb{R}^d}\left[h_{\theta_0}(y + u b_n) - h_{\theta_0}(y)\right] K(u)\,du\right| f_{\theta_0}(y)\,dy \\
&= \sqrt{n}\int_{E_n}\left|\int_{\mathbb{R}^d}\left[\frac{1}{2!}\sum_{i,j} u_i u_j\, b_n^2\,\frac{\partial^2 h_{\theta_0}(y)}{\partial y_i \partial y_j} + o(b_n^2)\right] K(u)\,du\right| f_{\theta_0}(y)\,dy \\
&\le \sqrt{n}\,b_n^2\int_{E_n}\left[\frac{1}{2}\sum_{i=1}^{d}\left|\frac{\partial^2 h_{\theta_0}(y)}{\partial y_i^2}\right|\int_{\mathbb{R}^d} u_i^2\,K(u)\,du + o(1)\right] f_{\theta_0}(y)\,dy \to 0 \quad \text{as } n \to +\infty,
\end{aligned}$$
by condition (2) of the lemma. Furthermore,
$$\begin{aligned}
I_{2n} &= \sqrt{n}\int_{E_n^c}\left|\int_{\mathbb{R}^d} h_{\theta_0}(y + u b_n)\,K(u)\,du - h_{\theta_0}(y)\right| f_{\theta_0}(y)\,dy \\
&\le \sqrt{n}\left\{\int_{E_n^c}\left(\int_{\mathbb{R}^d}\left|h_{\theta_0}(y + u b_n)\right| K(u)\,du\right) f_{\theta_0}(y)\,dy + \int_{E_n^c}\left|h_{\theta_0}(y)\right| f_{\theta_0}(y)\,dy\right\} \\
&\le \sqrt{n}\left\{\int_{E_n^c}\left(\int_{\mathbb{R}^d}\left|h_{\theta_0}(y + u b_n)\right| K(u)\,du\right) f_{\theta_0}(y)\,dy + \int_{E_n^c}\left|\dot g_{\theta_0}(y)\right| f_{\theta_0}^{1/2}(y)\,dy\right\} \to 0 \quad \text{as } n \to +\infty,
\end{aligned}$$
by condition (3) of the lemma. Therefore
$$Y_n = \sqrt{n}\left[\int_{\mathbb{R}^d} h_{\theta_0}(x)\,\hat f_n(x)\,dx - \frac{1}{n}\sum_{i=1}^{n} h_{\theta_0}(X_i)\right] \to 0 \quad \text{in probability, since } E|Y_n| \le I_{1n} + I_{2n} \to 0.$$

Step 2: asymptotic normality of (1/√n) Σ_{i=1}^{n} h_{θ_0}(X_i), θ_0 ∈ ℝ^s, s ≥ 1.

(1) θ_0 ∈ ℝ: the proof is similar to that of Theorem 2; we use the inequality of Davydov [22] instead of that of Billingsley. Note that
$$E\,h_{\theta_0}(X_1) = \int_{\mathbb{R}^d} h_{\theta_0}(x)\, f_{\theta_0}(x)\,dx = \frac{1}{2}\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, f_{\theta_0}^{1/2}(x)\,dx = \frac{1}{4}\int_{\mathbb{R}^d} 2\,\dot g_{\theta_0}(x)\, g_{\theta_0}(x)\,dx = 0$$
(since ∫ 2 ġ_{θ_0} g_{θ_0} dx = ∂/∂θ ∫ f_θ dx |_{θ=θ_0} = 0), and
$$E\,h_{\theta_0}^2(X_1) = \int_{\mathbb{R}^d} h_{\theta_0}^2(x)\, f_{\theta_0}(x)\,dx = \frac{1}{4}\int_{\mathbb{R}^d} \dot g_{\theta_0}^2(x)\,dx = \frac{1}{4}\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\,dx.$$

(2) θ_0 ∈ ℝ^s, s > 1: recall that X_n converges in distribution to X if and only if u^t X_n converges in distribution to u^t X for all u ∈ ℝ^s (the Cramér-Wold device). Let u ∈ ℝ^s,
$$Y_i = h_{\theta_0}(X_i) = \frac{\dot g_{\theta_0}(X_i)}{2 f_{\theta_0}^{1/2}(X_i)} \qquad \text{and} \qquad T_n = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} Y_i;$$
the real random variables (u^t Y_i, i ≥ 1) are strongly mixing with mean zero and variance u^t Γ u, where Γ is the covariance matrix of Y_1: Γ = E(Y_1 Y_1^t). From (1),
$$u^t T_n = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} u^t Y_i \xrightarrow{d} N(0, u^t \Gamma u).$$
Therefore,
$$\frac{1}{\sqrt{n}}\sum_{k=1}^{n} h_{\theta_0}(X_k) \xrightarrow{d} N(0, \Gamma), \qquad \text{where } \Gamma = \frac{1}{4}\int_{\mathbb{R}^d} \dot g_{\theta_0}(x)\, \dot g_{\theta_0}^t(x)\,dx.$$

This completes the proof of the lemma.

A4. Proof of Lemma 2

Proof.
$$\begin{aligned}
R_n &= \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x)\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx \\
&= \int_{G_n} \sqrt{n}\, h_{\theta_0}(x)\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx + \int_{G_n^c} \sqrt{n}\, h_{\theta_0}(x)\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx \\
&= R_{n1} + R_{n2}.
\end{aligned}$$
We have
$$R_{n1} \le \int_{G_n} \sqrt{n}\, h_{\theta_0}(x)\, \frac{\left(\hat f_n(x) - f_{\theta_0}(x)\right)^2}{\left(\hat f_n^{1/2}(x) + f_{\theta_0}^{1/2}(x)\right)^2}\, dx \le \int_{G_n} \sqrt{n}\, h_{\theta_0}(x)\, \frac{\left(\hat f_n(x) - f_{\theta_0}(x)\right)^2}{f_{\theta_0}(x)}\, dx.

Now,
$$E(R_{n1}) \le \int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x)\, \sqrt{n}\, E\left(\hat f_n(x) - f_{\theta_0}(x)\right)^2 dx = \int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x)\, \sqrt{n}\left[E\left(\hat f_n(x) - \bar f_n(x)\right)^2 + \left(\bar f_n(x) - f_{\theta_0}(x)\right)^2\right] dx.$$

(1)
$$\sqrt{n}\, E\left(\hat f_n(x) - \bar f_n(x)\right)^2 = \frac{\sqrt{n}}{(n b_n^d)^2}\, E\left(\sum_{i=1}^{n} Y_i\right)^2 = \frac{1}{n^{3/2} b_n^{2d}}\left[n\, E\, Y_1^2 + 2\sum_{i<j} E(Y_i Y_j)\right] \le \frac{1}{n^{3/2} b_n^{2d}}\left[n\, E\, K^2\!\left(\frac{x - X_1}{b_n}\right) + 2\sum_{j=1}^{n-1} j\left|E(Y_1 Y_{j+1})\right|\right],$$
with Y_i = K((x − X_i)/b_n) − E K((x − X_i)/b_n). Using Davydov's inequality for mixing processes, we get
$$\sum_{j=1}^{n-1} j\left|E(Y_1 Y_{j+1})\right| \le \sum_{j=1}^{n-1} j\, 2p\,(2\alpha(j))^{1/p}\left(E\left|Y_1\right|^r\right)^{1/r}\left(E\left|Y_{j+1}\right|^q\right)^{1/q} = 2p\left(E\left|Y_1\right|^r\right)^{1/r}\left(E\left|Y_1\right|^q\right)^{1/q}\sum_{j=1}^{n-1} j\,(2\alpha(j))^{1/p}.$$
Choosing q ≥ 2 and r ≥ 2, we obtain
$$\begin{aligned}
\sum_{j=1}^{n-1} j\left|E(Y_1 Y_{j+1})\right| &\le 2p\left((2N_1)^{r-2}\, E\,Y_1^2\right)^{1/r}\left((2N_1)^{q-2}\, E\,Y_1^2\right)^{1/q}\sum_{j=1}^{n-1} j\,(2\alpha(j))^{1/p} \\
&\le 2p\, 2^{1/p}\left((2N_1)^{q-2}\right)^{1/q}\left((2N_1)^{r-2}\right)^{1/r}\left(E\,Y_1^2\right)^{1/q+1/r}\sum_{j=1}^{n-1} j\,(\alpha(j))^{1/p} \\
&\le C_1\left[E\, K^2\!\left(\frac{x - X_1}{b_n}\right)\right]^{1/q+1/r}\sum_{j=1}^{\infty} j\,\varphi(j) \le C_1'\left[E\, K^2\!\left(\frac{x - X_1}{b_n}\right)\right]^{1/q+1/r},
\end{aligned}$$
where φ(j) = (α(j))^{1/p}; the series Σ j φ(j) converges because α decays exponentially. Hence,
$$\begin{aligned}
\sqrt{n}\, E\left(\hat f_n(x) - \bar f_n(x)\right)^2 &\le \frac{1}{n^{1/2} b_n^{2d}}\, E\, K^2\!\left(\frac{x - X_1}{b_n}\right) + \frac{C_1'}{n^{3/2} b_n^{2d}}\left[E\, K^2\!\left(\frac{x - X_1}{b_n}\right)\right]^{1/q+1/r} \\
&= \frac{1}{n^{1/2} b_n^{d}}\,\frac{1}{b_n^d}\, E\, K^2\!\left(\frac{x - X_1}{b_n}\right) + \frac{C_1'}{n^{3/2} b_n^{d + d/p}}\left[\frac{1}{b_n^d}\, E\, K^2\!\left(\frac{x - X_1}{b_n}\right)\right]^{1/q+1/r} \\
&= \frac{1}{n^{1/2} b_n^{d}}\int_{\mathbb{R}^d} K^2(t)\, f_{\theta_0}(x - t b_n)\, dt + \frac{C_1'}{n^{3/2} b_n^{d + d/p}}\left[\int_{\mathbb{R}^d} K^2(t)\, f_{\theta_0}(x - t b_n)\, dt\right]^{1/q+1/r}.
\end{aligned}$$

(2)
$$\begin{aligned}
\sqrt{n}\left(\bar f_n(x) - f_{\theta_0}(x)\right)^2 &= \sqrt{n}\left[\int_{\mathbb{R}^d} K(t)\left(\frac{1}{2!}\sum_{i,j} t_i t_j\, b_n^2\, \frac{\partial^2 f_{\theta_0}(x)}{\partial x_i \partial x_j} + o(b_n^2)\right) dt\right]^2 \\
&= n^{1/2} b_n^4\left[\frac{1}{2}\sum_{i=1}^{d} \frac{\partial^2 f_{\theta_0}(x)}{\partial x_i^2}\int_{\mathbb{R}^d} t_i^2 K(t)\, dt + o(1)\right]^2 \\
&\le 2d\, n^{1/2} b_n^4\left[\frac{1}{4}\sum_{i=1}^{d}\left(\frac{\partial^2 f_{\theta_0}(x)}{\partial x_i^2}\right)^2\left(\int_{\mathbb{R}^d} t_i^2 K(t)\, dt\right)^2 + o(1)\right].
\end{aligned}$$
Therefore,
$$\begin{aligned}
E(R_{n1}) &\le \frac{1}{n^{1/2} b_n^{d}}\int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x)\left(\int_{\mathbb{R}^d} K^2(t)\, f_{\theta_0}(x - t b_n)\, dt\right) dx \\
&\quad + \frac{C_1'}{n^{3/2} b_n^{d + d/p}}\int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x)\left[\int_{\mathbb{R}^d} K^2(t)\, f_{\theta_0}(x - t b_n)\, dt\right]^{1/q+1/r} dx \\
&\quad + 2d\, n^{1/2} b_n^4\int_{G_n} h_{\theta_0}(x)\, f_{\theta_0}^{-1}(x)\left[\frac{1}{4}\sum_{i=1}^{d}\left(\frac{\partial^2 f_{\theta_0}(x)}{\partial x_i^2}\right)^2\left(\int_{\mathbb{R}^d} t_i^2 K(t)\, dt\right)^2 + o(1)\right] dx \to 0 \quad \text{as } n \to +\infty,
\end{aligned}$$
by conditions (1), (2) and (3) of Lemma 2. The last relation implies that
$$R_{n1} \to 0 \quad \text{in probability as } n \to +\infty. \qquad (11)$$

Furthermore,
$$\begin{aligned}
R_{n2} &\le \sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx \\
&\le 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\left(\hat f_n(x) + f_{\theta_0}(x)\right) dx \\
&\le 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\, f_{\theta_0}(x)\, dx + 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\, \hat f_n(x)\, dx \\
&\le 2\sqrt{n}\int_{G_n^c} \dot g_{\theta_0}(x)\, f_{\theta_0}^{1/2}(x)\, dx + 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\, \hat f_n(x)\, dx \\
&= 2\sqrt{n}\int_{G_n^c} \dot g_{\theta_0}(x)\, f_{\theta_0}^{1/2}(x)\, dx + R_{n22}.
\end{aligned}$$
We have
$$E(R_{n22}) = 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\, E\,\hat f_n(x)\, dx = 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\left(\int_{\mathbb{R}^d} \frac{1}{b_n^d}\, K\!\left(\frac{x - y}{b_n}\right) f_{\theta_0}(y)\, dy\right) dx = 2\sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\left(\int_{\mathbb{R}^d} K(u)\, f_{\theta_0}(x + u b_n)\, du\right) dx.$$
Therefore, if
$$\sqrt{n}\int_{G_n^c} \dot g_{\theta_0}(x)\, f_{\theta_0}^{1/2}(x)\, dx \to 0 \qquad \text{and} \qquad \sqrt{n}\int_{G_n^c} h_{\theta_0}(x)\left(\int_{\mathbb{R}^d} K(u)\, f_{\theta_0}(x + u b_n)\, du\right) dx \to 0,$$
then
$$R_{n2} \to 0 \quad \text{in probability as } n \to +\infty. \qquad (12)$$
(11) and (12) imply that
$$R_n = \int_{\mathbb{R}^d} \sqrt{n}\, h_{\theta_0}(x)\left(\hat f_n^{1/2}(x) - f_{\theta_0}^{1/2}(x)\right)^2 dx \to 0 \quad \text{in probability as } n \to +\infty.$$