Journal of Mathematical Inequalities, Volume 3, Number 4 (2009), 567–576

CONVERGENCE RATE IN MULTIDIMENSIONAL IRREGULAR SAMPLING RESTORATION

ANDRIY YA. OLENKO AND TIBOR K. POGÁNY

Abstract. A new magnitude of the truncation error upper bound and the corresponding convergence rate are obtained for the Whittaker–Kotel'nikov–Shannon (WKS) sampling restoration sum over the Bernstein function class $B^q_{\pi,d}$, $q \geqslant 1$, $d \in \mathbb{N}$, when the decay rate of the sampled function is unknown. The case of multidimensional irregular sampling is discussed.

1. Introduction

The classical WKS sampling theorem has been extended to the case of nonuniform sampling by numerous authors. For detailed information on the theory and its various applications we refer to [3, 6]. Most known irregular sampling results deal with Paley–Wiener functions, i.e. with entire functions whose restriction to the real line belongs to $L^2(\mathbb{R})$. It seems that the best known nonuniform WKS sampling results for entire functions in $L^p$-spaces were given in [4, 5, 19]. However, no explicit truncation error upper bounds for multidimensional WKS reconstruction are available in the open literature. Recently the authors derived multidimensional $L^p$-WKS sampling theorems with precise truncation error estimates; see [11, 12] for more details and discussion. In this paper we use the methods developed in [11, 12] to investigate multidimensional irregular sampling in $L^p$-spaces. A new magnitude of the truncation error upper bound and the corresponding convergence rate are obtained.

2. Multidimensional Plancherel–Pólya inequality

To prove the main theorem we need the multidimensional analogue of the Plancherel–Pólya inequality, see [11]. Various multidimensional Plancherel–Pólya inequalities can be found, e.g., in Triebel's book [18]; moreover, in recent years several far-reaching generalizations of the multidimensional Plancherel–Pólya inequality have been obtained, among others in [2] and [13]. Unfortunately, no explicit estimates of the Plancherel–Pólya constant appear either in these articles or in the articles referenced therein, including [14].

Mathematics subject classification (2000): Primary 94A20, 26D15; Secondary 30D15, 41A05.
Keywords and phrases: Whittaker–Kotel'nikov–Shannon sampling restoration formula; approximation/interpolation error level; truncation error upper bound; irregular sampling; multidimensional sampling; convergence rate.
The authors were supported by the La Trobe University Research Grant-501821 "Sampling, wavelets and optimal stochastic modelling".


In order to present our estimate of the multidimensional Plancherel–Pólya constant we adopt the following conventions: (i) $\|\cdot\|_p$ denotes the $L^p$-norm for finite $p$ (while $\|\cdot\|_\infty \equiv \operatorname{ess\,sup}|\cdot|$), and (ii) hereinafter $B^r_{\sigma,d}$, $r>0$, denotes the Bernstein class [7] of $d$-variable entire functions of exponential type at most $\sigma=(\sigma_1,\dots,\sigma_d)$ coordinatewise whose restriction to $\mathbb{R}^d$ belongs to $L^r(\mathbb{R}^d)$.

THEOREM 1. [11, Theorem 1] Let $T=\{t_n\}_{n\in\mathbb{Z}^d}$, $t_n=(t_{n_1},\dots,t_{n_d})$, be a real separated sequence, i.e.
$$\inf_{n_\ell\neq m_\ell}|t_{n_\ell}-t_{m_\ell}|\geqslant\delta_\ell>0,\qquad \ell=1,\dots,d.$$
Let $f\in B^r_{\sigma,d}$, $r\geqslant 1$. Then
$$\sum_{n\in\mathbb{Z}^d}|f(t_n)|^r\leqslant B_{d,r}\,\|f\|_r^r,\qquad\text{where}\quad B_{d,r}=\Big(\frac{8}{r\pi}\Big)^{d}\prod_{\ell=1}^{d}\frac{e^{r\delta_\ell\sigma_\ell/2}-1}{\sigma_\ell\,\delta_\ell^{2}}.$$
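A quick numerical illustration (ours, not part of the paper) may help to gauge the size of $B_{d,r}$. The Python sketch below evaluates the constant of Theorem 1 and checks the inequality in the simplest one-dimensional situation, namely uniform nodes $t_n=n$ (so $\delta_1=1$), $\sigma_1=\pi$, $r=2$ and $f=\operatorname{sinc}\in B^2_{\pi,1}$; the helper names `pp_constant` and `sinc` are ours.

```python
import math

def pp_constant(r, sigma, delta):
    """Plancherel-Polya constant B_{d,r} of Theorem 1 for exponent r,
    type vector sigma = (sigma_1,...,sigma_d) and separations delta."""
    value = (8.0 / (r * math.pi)) ** len(sigma)
    for s, dl in zip(sigma, delta):
        value *= (math.exp(r * dl * s / 2.0) - 1.0) / (s * dl ** 2)
    return value

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

# One-dimensional check: f = sinc lies in B^2_{pi,1}, nodes t_n = n.
lhs = sum(abs(sinc(n)) ** 2 for n in range(-2000, 2001))    # = 1, since sinc(n) = 0 for n != 0
rhs = pp_constant(r=2, sigma=[math.pi], delta=[1.0]) * 1.0  # ||sinc||_2^2 = 1 by Parseval
print(f"sum |f(t_n)|^2 = {lhs:.6f} <= B * ||f||_2^2 = {rhs:.6f}")  # bound ~ 8.97, so it holds
```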

3. Multidimensional irregular sampling

In this section we recall the multidimensional sampling theorem for the $B^r_{\sigma,d}$ functional class; the theorem was proven in [12]. Let $T:=\{t_n=n+h_n\}$, $h_n:=(h_{n_1},\dots,h_{n_d})$, $N:=(N_1,\dots,N_d)\in\mathbb{N}^d$, while
$$J_x:=\big\{n:\ |x_j-n_j|\leqslant N_j,\ j=1,\dots,d\big\}\tag{1}$$
and
$$S(x,t_n)=\prod_{j=1}^{d}\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,t_{n_j})(x_j-t_{n_j})},$$
where $G'_N(x,t)$ denotes the derivative of $G_N(x,t)$ with respect to $t$, with
$$G_N(x,t)=(t-h_0)\,\operatorname{sinc}(t)\prod_{\substack{|x-k|\leqslant N\\ k\neq 0}}\frac{k}{t_k}\Big(1-\frac{h_k}{t-k}\Big),\tag{2}$$
$$\operatorname{sinc}(t):=\begin{cases}\dfrac{\sin(\pi t)}{\pi t}&\text{if }t\neq 0,\\[1mm] 1&\text{if }t=0.\end{cases}$$

Denote $M=(M_1,\dots,M_d)$, $\delta=(\delta_1,\dots,\delta_d)$, $\widehat M:=\max_{j=1,\dots,d}M_j$. Assume that $t_{n_j}=n_j+h_{n_j}$, $|h_{n_j}|\leqslant M_j$, $j=1,\dots,d$, for all $n\in J_x$.
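The definition (2) can be made concrete with a short one-dimensional Python sketch (our illustration, not code from the paper): it builds the window canonical product $G_N(x,t)$ from a jitter map $k\mapsto h_k$ and evaluates the coefficient $G_N(x,x)/\big(G'_N(x,t_n)(x-t_n)\big)$ appearing in $S(x,t_n)$; for simplicity the derivative $G'_N$ is approximated by a central difference rather than computed in closed form.

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def G(x, t, h, N):
    """Window canonical product G_N(x,t) of (2), one-dimensional sketch.
    h maps an integer k to its jitter h_k (zero outside the window J_x)."""
    value = (t - h.get(0, 0.0)) * sinc(t)
    for k in range(math.floor(x) - N, math.floor(x) + N + 1):
        if k != 0 and abs(x - k) <= N:
            hk = h.get(k, 0.0)
            value *= (k / (k + hk)) * (1.0 - hk / (t - k))  # evaluate at non-integer t inside the window
    return value

def coefficient(x, n, h, N, eps=1e-6):
    """Ratio G_N(x,x) / (G'_N(x,t_n)(x - t_n)); G'_N via central difference."""
    tn = n + h.get(n, 0.0)
    dG = (G(x, tn + eps, h, N) - G(x, tn - eps, h, N)) / (2.0 * eps)
    return G(x, x, h, N) / (dG * (x - tn))

# Sanity check: with zero jitter the coefficient reduces to sinc(x - n).
x = 0.3
print(coefficient(x, 1, {}, N=10), sinc(x - 1))   # both ~ 0.368
```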


THEOREM 2. [12, Theorem 2] Let $f\in B^q_{\sigma,d}$, $q\geqslant 1$, $\sigma_j\leqslant\pi$ for all $j$, and let $T=\{t_n\}_{n\in\mathbb{Z}^d}$ be a real separated sequence with
$$\widehat M\leqslant\frac14\ \ \text{for }q=1\qquad\text{and}\qquad \widehat M<\frac{1}{4q}\ \ \text{for }1<q<\infty.\tag{3}$$
Then the sampling expansion
$$f(x)=\sum_{n\in\mathbb{Z}^d}f(t_n)\prod_{j=1}^{d}\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,t_{n_j})(x_j-t_{n_j})}\tag{4}$$

holds uniformly on each bounded $x$-subset of $\mathbb{R}^d$. Moreover, the series in (4) converges absolutely as well.

In this framework the sampling restoration procedure becomes of Lagrange–Yen type [1, 6, 15, 21]. In the general case of multidimensional irregular sampling with window canonical product sampling function $S(x,t_n)$ we have time-jittered nodes outside $J_x$ as well. Nonvanishing time-jitter $h$ outside $J_x$ leads to functions $G_N(x,t)$ given by formulae different from (2), see [4]. However, in irregular sampling applications we would like to approximate $f(x)$ using only its values at the sample nodes $t_n$ indexed by $J_x$. Therefore we can try to use the sampling approximation sum truncated to $J_x$,
$$Y_{J_x}(f;x)=\sum_{n\in J_x}f(t_n)\prod_{j=1}^{d}\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,t_{n_j})(x_j-t_{n_j})},\tag{5}$$
with $G_N(x,t)$ given by (2), even for arbitrary sample nodes outside $J_x$. Under such assumptions the truncation error $\|T_{N,d}(f;x)\|_\infty=\|f(x)-Y_{J_x}(f;x)\|_\infty$ coincides with the truncation error for the case given by (1)–(4). One can see that $T_{N,d}(f;x)$ depends on the nonvanishing $h$ in $J_x$ due to the multiplicative form of (4) and (5). Therefore the multidimensional sampling problems are more difficult than the one-dimensional ones.

4. Truncation error upper bounds

In this section we obtain universal truncation bounds for the multidimensional irregular sampling restoration procedure. The most frequently appearing estimate of the truncation error is of the form
$$\|T_J(f;x)\|_\infty\leqslant\Big(\sum_{n\in\mathbb{Z}^d\setminus J}|S(x,t_n)|^p\Big)^{1/p}\Big(\sum_{n\in\mathbb{Z}^d\setminus J}|f(t_n)|^q\Big)^{1/q}=:A_p\,B_q,\tag{6}$$
$p,q$ being a conjugated Hölder pair, i.e. $1/p+1/q=1$.
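To see the truncated sum (5) and the error $\|f-Y_{J_x}(f;\cdot)\|_\infty$ at work, the following self-contained one-dimensional Python sketch (ours; the test signal, the jitter pattern and all helper names are chosen purely for illustration) restores $f(t)=\operatorname{sinc}^2(t/2)\in B^1_{\pi,1}$ from jittered nodes $t_n=n+h_n$ and reports the maximal error on a small grid for a few window sizes $N$.

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def G(x, t, h, N):
    """Window canonical product G_N(x,t) of (2), 1-D version (repeated here
    so that this snippet runs on its own)."""
    value = (t - h.get(0, 0.0)) * sinc(t)
    for k in range(math.floor(x) - N, math.floor(x) + N + 1):
        if k != 0 and abs(x - k) <= N:
            hk = h.get(k, 0.0)
            value *= (k / (k + hk)) * (1.0 - hk / (t - k))
    return value

def coefficient(x, n, h, N, eps=1e-6):
    """G_N(x,x) / (G'_N(x,t_n)(x - t_n)), the coefficient of f(t_n) in (5)."""
    tn = n + h.get(n, 0.0)
    dG = (G(x, tn + eps, h, N) - G(x, tn - eps, h, N)) / (2.0 * eps)
    return G(x, x, h, N) / (dG * (x - tn))

def Y_truncated(f, x, h, N):
    """Truncated irregular-sampling sum Y_{J_x}(f; x) of (5)."""
    return sum(f(n + h.get(n, 0.0)) * coefficient(x, n, h, N)
               for n in range(math.floor(x) - N, math.floor(x) + N + 1)
               if abs(x - n) <= N)

def f(t):
    return sinc(t / 2.0) ** 2                          # entire of type pi, integrable

h = {n: 0.1 * math.sin(n) for n in range(-200, 201)}   # deterministic jitter, |h_n| <= 0.1
grid = [0.05 + 0.1 * i for i in range(10)]             # evaluation points in (0, 1)

for N in (5, 10, 20):
    err = max(abs(f(x) - Y_truncated(f, x, h, N)) for x in grid)
    print(f"N = {N:2d}   max grid error = {err:.2e}")  # the error shrinks as N grows
```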


To obtain a class of truncation error upper bounds when the decay rate of the initial signal function is not known, one uses the straightforward estimate $B_q\leqslant C_{f,T}\,\|f\|_q$, where $C_{f,T}$ is a suitable absolute constant. Thus (6) becomes $\|T_J(f;x)\|_\infty\leqslant C_{f,T}\,A_p\,\|f\|_q$. We are interested in estimates for $A_p$ that vanish as $|J_x|\to\infty$; the resulting upper bounds are then truly universal for wide classes of $f(x)$ and $T$. We will use the following two-sided bound for a ratio of gamma functions, see [16, 17, 20].

LEMMA 1. Let $z>0$, $b\in(0,1)$. Then
$$\frac{z}{(z+b)^{1-b}}\leqslant\frac{\Gamma(z+b)}{\Gamma(z)}\leqslant z^{b}.\tag{7}$$
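A quick numerical check of (7) (our illustration; the inequality itself goes back to Wendel [20]) can be carried out with a few lines of Python, using the log-gamma function for numerical stability:

```python
import math

def wendel_bounds(z, b):
    """Return (lower bound, Gamma(z+b)/Gamma(z), upper bound) from (7)."""
    ratio = math.exp(math.lgamma(z + b) - math.lgamma(z))   # stable even for large z
    return z / (z + b) ** (1.0 - b), ratio, z ** b

for z, b in [(0.5, 0.25), (3.0, 0.5), (100.0, 0.9)]:
    lo, mid, hi = wendel_bounds(z, b)
    print(f"z={z:6.1f}, b={b:.2f}:  {lo:.6f} <= {mid:.6f} <= {hi:.6f}")
```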

THEOREM 3. Let $\widehat M$ satisfy (3), $f\in B^q_{\sigma,d}$, $q\geqslant 1$, $\sigma_j\leqslant\pi$ for all $j=1,\dots,d$. Then we have
$$\|T_{N,d}(f,x)\|_\infty\leqslant K_\delta(N,M)\cdot\|f\|_q,\tag{8}$$
where
$$K_\delta(N,M)=\Big(\frac{8}{q\pi^{2}}\Big)^{d/q}\prod_{j=1}^{d}\frac{(e^{q\pi\delta_j/2}-1)^{1/q}}{\delta_j^{2/q}}\left[\sum_{k=1}^{d}C_1(N_k,M_k)\prod_{\substack{j=1\\ j\neq k}}^{d}\big(C_1(N_j,M_j)+C_2(M_j,\delta_j)\,C_3(N_j,M_j)\big)\right]^{1/p},\tag{9}$$
and
$$C_1(N,M):=\frac{2^{Mp+2p+1}\pi^{p}(M+1/2)^{p}(N+3/2)^{2Mp}}{3^{p}\,\Gamma^{2p}(M+1/2)}\left(\frac{(1+N)^{M}(N-1/2)}{N(N-M-1/2)}\right)^{p}\left(1+\frac{N}{p-1}\right);$$
$$C_2(M,\delta):=\Big(\frac{3}{4M}\Big)^{p}\Big(\frac{\pi}{2}\Big)^{2Mp-p}\Big(\frac{e}{\sqrt{M}\,\delta}\Big)^{4Mp};$$
$$C_3(N,M):=2\,(M+3/2)^{4pM-p}+\frac{2\,(N+M+3/2)^{4pM-p+1}-2\,(M+3/2)^{4pM-p+1}}{4pM-p+1}.$$

Proof. As we mentioned earlier, the function $Y_{J_x}(f;x)$ does not depend on the samples in $T\setminus\{t_n:n\in J_x\}$, and we assume that $h\equiv 0$ outside $J_x$. Therefore the structure of $\{t_{n_j}:n_j\notin J_{x_j}\}$ becomes uniform, $t_{n_j}\equiv n_j$. In this case Theorem 2 guarantees that $f(x)$ admits the representation (4), and the model (6), so evaluated, gives us
$$\|T_{N,d}(f;x)\|_\infty\leqslant\Bigg(\sum_{n\in\mathbb{Z}^d\setminus J_x}\prod_{j=1}^{d}\bigg|\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,t_{n_j})(x_j-t_{n_j})}\bigg|^{p}\Bigg)^{1/p}\Bigg(\sum_{n\in\mathbb{Z}^d\setminus J_x}|f(n)|^{q}\Bigg)^{1/q}=:A_p\cdot B_q.$$


The multiplicative structure of $S(x,t_n)$ enables us to estimate $A_p$ in the following way:
$$A_p^{p}\leqslant\sum_{k=1}^{d}\sum_{n_k\in\mathbb{Z}\setminus J_{x_k}}\bigg|\frac{G_{N_k}(x_k,x_k)}{G'_{N_k}(x_k,n_k)(x_k-n_k)}\bigg|^{p}\prod_{\substack{j=1\\ j\neq k}}^{d}\sum_{n_j\in\mathbb{Z}}\bigg|\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,t_{n_j})(x_j-t_{n_j})}\bigg|^{p}$$
$$=\sum_{k=1}^{d}\sum_{n_k\in\mathbb{Z}\setminus J_{x_k}}\bigg|\frac{G_{N_k}(x_k,x_k)}{G'_{N_k}(x_k,n_k)(x_k-n_k)}\bigg|^{p}\prod_{\substack{j=1\\ j\neq k}}^{d}\Bigg[\sum_{n_j\in J_{x_j}}\bigg|\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,t_{n_j})(x_j-t_{n_j})}\bigg|^{p}+\sum_{n_j\in\mathbb{Z}\setminus J_{x_j}}\bigg|\frac{G_{N_j}(x_j,x_j)}{G'_{N_j}(x_j,n_j)(x_j-n_j)}\bigg|^{p}\Bigg].\tag{10}$$

Let us estimate $\sum_{n\in\mathbb{Z}\setminus J_x}\big|\frac{G_N(x,x)}{G'_N(x,n)(x-n)}\big|^{p}$. Note that, due to our assumptions,
$$|\psi_N(n,x)|:=\bigg|\frac{G_N(x,x)}{G'_N(x,t_n)(x-t_n)}\bigg|=\bigg|\frac{\sin(\pi x)}{\pi(x-n)}\bigg|\prod_{j\in J_x}\bigg|\frac{(t_j-x)(j-n)}{(t_j-n)(j-x)}\bigg|,\qquad n\in\mathbb{Z}\setminus J_x.$$
Hence
$$|\psi_N(n,x)|=\bigg|\operatorname{sinc}(x-j_x)\,\frac{(t_{j_x}-x)(j_x-n)}{(x-n)(t_{j_x}-n)}\bigg|\prod_{\substack{|j-x|\leqslant N\\ j\neq j_x}}\bigg|\frac{(t_j-x)(j-n)}{(t_j-n)(j-x)}\bigg|,$$
where $j_x$ denotes the integer closest to $x$, i.e. $j_x-0.5\leqslant x<j_x+0.5$. Due to $|h_{j_x}|\leqslant M$ we have
$$\bigg|\operatorname{sinc}(x-j_x)\,\frac{t_{j_x}-x}{x-n}\bigg|\leqslant\frac{M+1/2}{|x-n|}\tag{11}$$
and
$$\bigg|\frac{j_x-n}{t_{j_x}-n}\bigg|\leqslant 1+\frac{M}{|x-n|-M-1/2}\leqslant 1+\frac{M}{N-M-1/2}.\tag{12}$$
Due to $|h_j|\leqslant M$, $j\in J_x$, and Lemma 1 one concludes
$$\prod_{\substack{|j-x|\leqslant N\\ j\neq j_x}}\bigg|\frac{t_j-x}{j-x}\bigg|\leqslant\prod_{\substack{|j-x|\leqslant N\\ j\neq j_x}}\frac{|j-x|+M}{|j-x|}\leqslant\Bigg(\prod_{j=0}^{N}\frac{j+1/2+M}{j+1/2}\Bigg)^{2}=\bigg(\frac{\Gamma(1/2)}{\Gamma(M+1/2)}\,\frac{\Gamma(N+1+1/2+M)}{\Gamma(N+1+1/2)}\bigg)^{2}\leqslant\frac{\pi\,(N+3/2)^{2M}}{\Gamma^{2}(M+1/2)}.\tag{13}$$


Let $k(n,x):=\max(|j_x-n|-N,\,1)$. Then by Lemma 1 and $|h_j|\leqslant M\leqslant 1/4$ we have
$$\prod_{\substack{|j-x|\leqslant N\\ j\neq j_x}}\bigg|\frac{j-n}{t_j-n}\bigg|\leqslant\prod_{\substack{|j-x|\leqslant N\\ j\neq j_x}}\frac{|j-n|}{|j-n|-M}\leqslant\prod_{k=k(n,x)}^{k(n,x)+2N}\frac{k}{k-M}=\frac{\Gamma(k(n,x)+2N+1)}{\Gamma(k(n,x)+2N+1-M)}\,\frac{\Gamma(k(n,x)-M)}{\Gamma(k(n,x))}$$
$$\leqslant\frac{\big(k(n,x)+2N+1-M\big)^{M}\,k(n,x)}{\big(k(n,x)-M\big)\,\big(k(n,x)\big)^{M}}\leqslant\frac43\bigg(\frac{k(n,x)+2N+1}{k(n,x)}\bigg)^{M}.\tag{14}$$
Collecting the estimates (11)–(14), we deduce
$$|\psi_N(n,x)|\leqslant\frac{1}{|x-n|}\bigg(\frac{k(n,x)+2N+1}{k(n,x)}\bigg)^{M}\frac{4\pi(M+1/2)}{3\,\Gamma^{2}(M+1/2)}\,\frac{N-1/2}{N-M-1/2}\,(N+3/2)^{2M},$$
and hence
$$\sum_{n\in\mathbb{Z}\setminus J_x}\bigg|\frac{G_N(x,x)}{G'_N(x,n)(x-n)}\bigg|^{p}=\sum_{n\in\mathbb{Z}\setminus J_x}|\psi_N(n,x)|^{p}\leqslant\bigg(\frac{4\pi(M+1/2)}{3\,\Gamma^{2}(M+1/2)}\,\frac{N-1/2}{N-M-1/2}\bigg)^{p}(N+3/2)^{2Mp}\sum_{n\in\mathbb{Z}\setminus J_x}\frac{1}{|x-n|^{p}}\bigg(\frac{k(n,x)+2N+1}{k(n,x)}\bigg)^{Mp}.$$
Thus, we proceed by evaluating
$$\sum_{n\in\mathbb{Z}\setminus J_x}\frac{1}{|x-n|^{p}}\bigg(\frac{k(n,x)+2N+1}{k(n,x)}\bigg)^{Mp}=\sum_{n\in\mathbb{Z}\setminus J_x}\frac{1}{|x-n|^{p}}\bigg(1+\frac{2N+1}{k(n,x)}\bigg)^{Mp}$$
$$\leqslant\frac{2(2+2N)^{Mp}}{N^{p}}+2\int_{N}^{\infty}\frac{1}{t^{p}}\bigg(1+\frac{2N+1}{t-N+1}\bigg)^{Mp}dt=\frac{2(2+2N)^{Mp}}{N^{p}}+2(2N+1)^{Mp}\int_{N}^{\infty}\frac{1}{t^{p}}\bigg(\frac{1}{2N+1}+\frac{1}{t-N+1}\bigg)^{Mp}dt$$
$$\leqslant\frac{2(2+2N)^{Mp}}{N^{p}}+\frac{2(2N+1)^{Mp}}{(p-1)N^{p-1}}\bigg(1+\frac{1}{2N+1}\bigg)^{Mp}=\frac{2^{Mp+1}(1+N)^{Mp}}{N^{p}}\bigg(1+\frac{N}{p-1}\bigg),$$
which gives
$$\sum_{n\in\mathbb{Z}\setminus J_x}\bigg|\frac{G_N(x,x)}{G'_N(x,n)(x-n)}\bigg|^{p}\leqslant C_1(N,M).$$
Now, let us evaluate $\sum_{n\in J_x}|\psi_N(n,x)|^{p}$, the second addend in (10).


We can rewrite the function $G_N(x,t)$ as
$$G_N(x,t)=(t-h_0)\prod_{n=1}^{\infty}\Big(1-\frac{t}{\widetilde t_n}\Big)\Big(1-\frac{t}{\widetilde t_{-n}}\Big),\qquad\text{where}\quad \widetilde t_n=\begin{cases}t_n,&\text{if }|x-n|\leqslant N,\\ n,&\text{else.}\end{cases}$$
Hence all results derived in [19] are valid for our function $G_N(x,t)$. By [19, Lemma 1.4.4 (a)] and [19, inequality (3.7)] we obtain
$$|\psi_N(n,x)|\leqslant\frac{3}{4M}\Big(\frac{\pi}{2}\Big)^{2M-1}\Big(\frac{e}{\sqrt{M}\,\delta}\Big)^{4M}\big(|x-t_n|+M+3/2\big)^{4M-1}.$$
Therefore
$$\sum_{n\in J_x}|\psi_N(n,x)|^{p}\leqslant\Big(\frac{3}{4M}\Big)^{p}\Big(\frac{\pi}{2}\Big)^{2Mp-p}\Big(\frac{e}{\sqrt{M}\,\delta}\Big)^{4Mp}\sum_{n\in J_x}\big(|x-t_n|+M+3/2\big)^{4pM-p}=C_2(M,\delta)\sum_{n\in J_x}\big(|x-t_n|+M+3/2\big)^{4pM-p}.$$
Finally, for $M$ satisfying (3) we have the estimate
$$\sum_{n\in J_x}\big(|x-t_n|+M+3/2\big)^{4pM-p}\leqslant 2\,(M+3/2)^{4pM-p}+2\int_{0}^{N}\big(t+M+3/2\big)^{4pM-p}\,dt$$
$$=2\,(M+3/2)^{4pM-p}+\frac{2\,(N+M+3/2)^{4pM-p+1}-2\,(M+3/2)^{4pM-p+1}}{4pM-p+1}=C_3(N,M),$$
which gives
$$\sum_{n\in J_x}\bigg|\frac{G_N(x,x)}{G'_N(x,t_n)(x-t_n)}\bigg|^{p}\leqslant C_2(M,\delta)\,C_3(N,M).$$

Therefore, by (10),
$$A_p^{p}\leqslant\sum_{k=1}^{d}C_1(N_k,M_k)\prod_{\substack{j=1\\ j\neq k}}^{d}\big(C_1(N_j,M_j)+C_2(M_j,\delta_j)\,C_3(N_j,M_j)\big).\tag{15}$$
To estimate $B_q$ we use Theorem 1 with $\sigma_1=\dots=\sigma_d=\pi$. This results in
$$B_q^{q}\leqslant B_{d,q}=\Big(\frac{8}{q\pi^{2}}\Big)^{d}\prod_{j=1}^{d}\frac{e^{q\pi\delta_j/2}-1}{\delta_j^{2}}.$$
Now, collecting all these involved estimates, we arrive at (8) and (9).
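As an illustration of the size of the bound (8)–(9) (ours, not from the paper; the parameter values are chosen arbitrarily), the constants $C_1$, $C_2$, $C_3$ and $K_\delta(N,M)$ can be evaluated directly:

```python
import math

def C1(N, M, p):
    """C_1(N, M) of Theorem 3."""
    front = (2 ** (M * p + 2 * p + 1) * math.pi ** p * (M + 0.5) ** p
             * (N + 1.5) ** (2 * M * p)) / (3 ** p * math.gamma(M + 0.5) ** (2 * p))
    middle = ((1 + N) ** M * (N - 0.5) / (N * (N - M - 0.5))) ** p
    return front * middle * (1 + N / (p - 1))

def C2(M, delta, p):
    """C_2(M, delta) of Theorem 3."""
    return ((3 / (4 * M)) ** p * (math.pi / 2) ** (2 * M * p - p)
            * (math.e / (math.sqrt(M) * delta)) ** (4 * M * p))

def C3(N, M, p):
    """C_3(N, M) of Theorem 3."""
    a = 4 * p * M - p
    return 2 * (M + 1.5) ** a + (2 * (N + M + 1.5) ** (a + 1) - 2 * (M + 1.5) ** (a + 1)) / (a + 1)

def K(N, M, delta, q):
    """Truncation-error constant K_delta(N, M) of (9); N, M, delta are d-vectors, q > 1."""
    p = q / (q - 1)                                   # conjugate exponent, 1/p + 1/q = 1
    d = len(N)
    b = (8 / (q * math.pi ** 2)) ** (d / q)
    for dj in delta:
        b *= (math.exp(q * math.pi * dj / 2) - 1) ** (1 / q) / dj ** (2 / q)
    inner = sum(C1(N[k], M[k], p)
                * math.prod(C1(N[j], M[j], p) + C2(M[j], delta[j], p) * C3(N[j], M[j], p)
                            for j in range(d) if j != k)
                for k in range(d))
    return b * inner ** (1 / p)

# d = 2, q = 2 (so p = 2), M_j = 0.1 < 1/(4q), delta_j = 0.8: the bound decreases as N grows.
for n in (10, 20, 40, 80):
    print(n, K([n, n], [0.1, 0.1], [0.8, 0.8], q=2))
```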


REMARK 1. Theorem 3 gives new truncation error upper bounds together with a method for deriving them; compare with the results in [12]. However, it has to be pointed out that there are many different bounds on the so-called Gautschi–Kershaw ratio of two gamma functions [17]. Applying other estimates instead of Lemma 1, one obtains a whole family of truncation error upper bounds. We used (7) because of its simplicity and elegance.

Denote here, and in what follows, $\underline\delta:=\min_{j=1,\dots,d}\delta_j$ and $\overline\delta:=\max_{j=1,\dots,d}\delta_j$.

COROLLARY 3.1. Suppose that the conditions of Theorem 3 are satisfied. Then we have
$$\|T_{N,d}(f,x)\|_\infty\leqslant K_{\underline\delta,\overline\delta}(N,\widehat M)\,\|f\|_q,$$
where
$$K_{\underline\delta,\overline\delta}(N,\widehat M)=\Big(\frac{8}{q\pi^{2}}\Big)^{d/q}\max\bigg\{\frac{e^{q\pi\underline\delta/2}-1}{\underline\delta^{2}},\,\frac{e^{q\pi\overline\delta/2}-1}{\overline\delta^{2}}\bigg\}^{d/q}\left[\sum_{k=1}^{d}C_1(N_k,\widehat M)\prod_{\substack{j=1\\ j\neq k}}^{d}\big(C_1(N_j,\widehat M)+C_2(\widehat M,\underline\delta)\,C_3(N_j,\widehat M)\big)\right]^{1/p}.\tag{16}$$

Proof. If we make use of the estimate (15) for $A_p$ with $\widehat M$ in place of all the $M_j$ and $\underline\delta$ in place of all the $\delta_j$, then $T$ may contain some additional nodes $t_n$, which can only increase $A_p$. It was shown in [12] that
$$B_{d,q}^{1/d}\leqslant\max\bigg\{\frac{8\,(e^{q\pi\underline\delta/2}-1)}{q\pi^{2}\underline\delta^{2}},\,\frac{8\,(e^{q\pi\overline\delta/2}-1)}{q\pi^{2}\overline\delta^{2}}\bigg\}.$$
Collecting the involved estimates, we arrive at (16).

THEOREM 4. Suppose that the conditions of Theorem 3 are satisfied. Let $\check N:=\min_{j=1,\dots,d}N_j\to\infty$ in such a way that $\max_{k,j=1,\dots,d}N_j/N_k=O(1)$. Then it holds that
$$\|T_{N,d}(f;x)\|_\infty=O\big(\check N^{\,3\widehat Mp-p+1}\big)\to 0.$$

Proof. According to the definitions of $C_j$, $j=1,2,3$, from Theorem 3, letting $\check N\to\infty$ we get
$$C_1(N,M)=O\big(N^{3Mp-p+1}\big);\qquad C_2(M,\delta)=O(1);\qquad C_3(N,M)=O\big(1+N^{4Mp-p+1}\big).$$
Having these facts in mind, by (16) we deduce
$$K^{p}_{\underline\delta,\overline\delta}(N,\widehat M)\sim\sum_{k=1}^{d}O\big(N_k^{3\widehat Mp-p+1}\big)\prod_{\substack{j=1\\ j\neq k}}^{d}\Big[O\big(N_j^{3\widehat Mp-p+1}\big)+O\big(1+N_j^{4\widehat Mp-p+1}\big)\Big]$$
$$\sim O\big(\check N^{\,3\widehat Mp-p+1}\big)\prod_{j=1}^{d}\Big[O\big(N_j^{3\widehat Mp-p+1}\big)+O(1)\Big]\Big|_{N_j=\check N}\sim O\big(\check N^{\,3\widehat Mp-p+1}\big)$$
as $\check N\to\infty$, provided $\widehat M$ satisfies (3). The proof is complete.

REMARK 2. Theorem 4 improves the results on convergence rates derived in [12]. Namely, by specifying the sampling size numbers $\check N$ and the irregular sampling deviation bound $\widehat M$, involving the Gautschi–Kershaw inequality for the ratio of gamma functions (Lemma 1), and Voss's upper bound on $\psi_N(n,x)$, we

• relax the condition upon $\widehat M$, with respect to the earlier condition $\widehat M<\dfrac{1}{q(4d-1)}$;

• obtain a better rate of convergence, with respect to the earlier one $O\big(\check N^{\,3\widehat Mp-p+1}\cdot\check N^{\,4\widehat M(d-1)}\big)$ appearing in [12, Corollary 3.2].
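For orientation (our illustration, not part of the paper), note that condition (3) already forces the exponent in Theorem 4 to be negative for $1<q<\infty$: since $p/q=p-1$,
$$3\widehat Mp-p+1<\frac{3p}{4q}-p+1=\frac{3(p-1)}{4}-(p-1)=-\frac{p-1}{4}<0.$$
For instance, for $q=2$ (hence $p=2$) and $\widehat M=0.1<\tfrac18$ the exponent equals $3\cdot 0.1\cdot 2-2+1=-0.4$.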

5. Final remarks

The methods proposed by the authors give an opportunity to obtain approximation error estimates for wide functional classes without strong assumptions on the decay rate of the functions/signals. In this article new magnitudes of truncation error upper bounds and rates of convergence were obtained for the $\|\cdot\|_\infty$-norm sampling theorem; the case of multidimensional irregular sampling was considered. The presented results, and the numerical simulations that illustrate them, open a few new interesting and important problems:

1. To obtain sharp estimates in Theorem 3 (for uniform sampling and p = 2 such sharp estimates were derived in [10]);
2. To obtain the best possible rate of convergence in Theorem 4;
3. To apply the obtained results to irregular sampling restoration for random fields, see [8, 9].

REFERENCES

[1] FLORNES K. M., LYUBARSKII YU. AND SEIP K., A direct interpolation method for irregular sampling, Appl. Comput. Harmon. Anal., 8, 1 (2000), 113–121.
[2] HAN Y.-S., Plancherel–Pólya type inequality on spaces of homogeneous type and its applications, Proc. Amer. Math. Soc., 126, 11 (1998), 3315–3327.


[3] HIGGINS J. R., Sampling in Fourier and Signal Analysis: Foundations, Clarendon Press, Oxford, 1996.
[4] HINSEN G., Irregular sampling of bandlimited L^p-functions, J. Approx. Theory, 72 (1993), 346–364.
[5] HINSEN G. AND KLÖSTERS D., The sampling series as a limiting case of Lagrange interpolation, Appl. Anal., 49 (1993), 49–60.
[6] MARVASTI F., ED., Nonuniform Sampling: Theory and Practice, Kluwer Academic Publishers, Boston, 2001.
[7] NIKOLSKIĬ S. M., Approximation of Functions of Several Variables and Imbedding Theorems, Springer-Verlag, New York, 1975.
[8] OLENKO A. YA. AND POGÁNY T. K., Direct Lagrange–Yen type interpolation of random fields, Theor. Stoch. Proc., 9 (25), 3–4 (2003), 242–254.
[9] OLENKO A. YA. AND POGÁNY T. K., Sharp exact upper bound for interpolation remainder of random processes, Theor. Probab. Math. Statist., 71 (2004), 133–144.
[10] OLENKO A. YA. AND POGÁNY T. K., On sharp exact bounds for remainders in multidimensional sampling theorem, Sampl. Theory Signal Image Process., 6, 3 (2007), 249–272.
[11] OLENKO A. YA. AND POGÁNY T. K., Universal truncation error upper bounds in sampling restoration, Georgian Math. J. (to appear).
[12] OLENKO A. YA. AND POGÁNY T. K., Universal truncation error upper bounds in irregular sampling restoration, Appl. Anal. (to appear).
[13] PESENSON I., Plancherel–Pólya type inequalities for entire functions of exponential type in L^p(R^d), J. Math. Anal. Appl., 330, 2 (2007), 1194–1206.
[14] PLANCHEREL M. AND PÓLYA GY., Fonctions entières et intégrales de Fourier multiples II, Comment. Math. Helv., 10 (1937), 110–163.
[15] POGÁNY T. K., Multidimensional Lagrange–Yen interpolation via Kotel'nikov–Shannon sampling formulas, Ukr. Math. J., 50, 11 (2003), 1810–1827.
[16] QI FENG, A class of logarithmically completely monotonic functions and the best bounds in the first Kershaw's double inequality, J. Comput. Appl. Math., 206, 2 (2007), 1007–1014.
[17] QI FENG, Bounds for the ratio of two Gamma functions, RGMIA Res. Rep. Coll., 11, 3 (2008), 1–63.
[18] TRIEBEL H., Theory of Function Spaces II, Birkhäuser Verlag, Basel, 1992.
[19] VOSS J. J., Irregular sampling: error analysis, applications, and extensions, in CD-ROM Nonuniform Sampling: Theory and Practice, Kluwer Academic Publishers, Boston, 2001.
[20] WENDEL J., Note on the gamma function, Amer. Math. Monthly, 55 (1948), 563–564.
[21] YEN J. L., On nonuniform sampling of bandwidth limited signals, IRE Trans. Circuit Theory, CT-3 (1956), 251–257.

(Received October 28, 2008)

Andriy Ya. Olenko
Department of Mathematics and Statistics
La Trobe University
Victoria 3086
Australia
e-mail: [email protected]

Tibor K. Pogány
Faculty of Maritime Studies
University of Rijeka
Studentska 2
51000 Rijeka
Croatia
e-mail: [email protected]

Journal of Mathematical Inequalities

www.ele-math.com [email protected]