Bessel Phase Functions: Calculation and Application


David E. Horsley

Sep 2016

Abstract The Bessel phase functions are used to represent the Bessel functions as a positive modulus and an oscillating trigonometric term. This decomposition can be used to aid root-finding for certain combinations of Bessel functions. In this article, we give some new properties of the modulus and phase functions and some asymptotic expansions derived from differential equation theory. We find a bound on the error of the first term of this asymptotic expansion and give a simple numerical method for refining this approximation via standard routines for the Bessel functions. We then show an application of the phase functions to the root-finding problem for linear and cross-product combinations of Bessel functions. This method improves upon previous methods and allows the roots of these functions to be calculated independently and in ascending order. We give some proofs of correctness and global convergence.

Keywords Bessel functions · phase functions · cross products · root-finding

Mathematics Subject Classification (2000) 33C10 · 33F05 · 65H05

1 Introduction

In this article, in the convention of [1, §10.18], we define, for x > 0 and ν ≥ 0, the modulus and phase functions via

Mν(x) e^{iθν(x)} = Hν^{(1)}(x),   (1)

Nν(x) e^{iφν(x)} = Hν^{(1)}′(x),   (2)

where Hν^{(1)}(x) = Jν(x) + iYν(x) is the Hankel function and primes indicate differentiation with respect to the argument. The phase functions θ and φ are real and continuous, with branches fixed by θ → −π/2 and φ → π/2 as x → 0+. Equivalently, we can define the phase functions by

θν(x) = arctan(Yν(x)/Jν(x)),   (3)

φν(x) = arctan(Yν′(x)/Jν′(x)),   (4)

D.E. Horsley: School of Physical Sciences, University of Tasmania, Private Bag 37, Hobart, Tasmania 7004, Australia. E-mail: [email protected]

where the branch of arctangent is chosen to ensure continuity. DLMF [1, §10.18], and its print companion [2, §10.18], provide an extended list of properties of these functions. These functions have applications in root finding for Bessel and related functions. For example, if we take a linear combination of Bessel functions,

Cν(x) = Jν(x) cos(πt) + Yν(x) sin(πt) = Mν(x) cos(θν(x) − πt),   (5)

the roots occur when the argument of the cosine passes through an odd multiple of π/2. McMahon [3] used this kind of argument to find asymptotic expansions for the roots of Bessel and related functions. However, outside asymptotic expansions, the literature does not contain much use for these functions. With this article, we argue that these functions can be of considerable use in numerical root finding, especially for cross-product functions. We give some new properties of the phase functions and present a simple method to evaluate them numerically. This allows us to construct a practical root-finding method for linear combinations of Bessel functions and their cross-products. Our Bessel function cross-product method significantly improves upon previous methods, particularly when roots of functions of high order are desired. There are a great number of methods for calculating the roots of the Bessel functions; however, few are concerned with linear combinations of such functions. A notable exception is the method of Segura [4], where a globally convergent algorithm was given. While the method presented there always converges to ensure the mth root is attained sufficiently accurately, an initial approximation to that root must be provided. The method we give in Section 6 has the advantage that it produces a simple algorithm which we can prove converges to the mth root in ascending order for any initial positive approximation — at least if the calculations are performed in exact arithmetic. There are even fewer methods available for calculating the roots of the cross-product functions — that is, functions such as f(x) = Yν(λx)Jν(x) − Jν(λx)Yν(x). Previous methods of calculating the roots of these functions involved using McMahon's expansion as an initial guess and applying a root solver to the functions directly. This behaves well for small order but poorly for large order.
Sorolla and Mattes [5] presented a method of calculating these roots which used roots at low order as initial approximations to higher-order roots. Using the phase functions provides a significant improvement over this method, as calculation of a root does not require knowledge of any other roots, and convergence can be guaranteed for a wide range of roots. Our results are primarily focused on the real roots of these functions; for a modern method for finding the complex roots of Bessel functions, see another recent article by Segura [6]. Segura's method applies similar ideas to this paper, but along anti-Stokes lines in the complex plane.


2 Analytic properties

We first review some properties of the phase functions and prove some new results which we will use in later sections.

Theorem 1 The phase function θν(x) is a monotone increasing function of x for ν > 0, and φν(x) is a monotone increasing function of x for x > ν and is monotone decreasing for x < ν.

Proof This result follows directly from the derivative identities in [1, §10.18.7–8]:

θν′(x) = [Jν(x)Yν′(x) − Jν′(x)Yν(x)] / [Jν²(x) + Yν²(x)] = 2/(πxMν²(x)),   (6)

φν′(x) = [Jν′(x)Yν″(x) − Jν″(x)Yν′(x)] / [Jν′²(x) + Yν′²(x)] = 2(x²−ν²)/(πx³Nν²(x)).   (7)

Theorem 2 θν(x) is a convex function of x > 0 — that is, θν′(x) is monotone increasing — for ν > 1/2, and concave for ν < 1/2. φν(x) is convex for x > ν > 0.

Proof The proof relies on the identities given in the previous theorem, as well as some integral representations of the modulus functions. To prove the convexity of θν(x), we use Nicholson's integral [1, §10.9.30]:

Mν²(x) = Jν²(x) + Yν²(x) = (8/π²) ∫₀^∞ K₀(2x sinh t) cosh(2νt) dt,   (8)

where K₀(z) is the modified Bessel function of the second kind. Next we use the result of Watson [7, pg. 446], which follows from integration by parts on Nicholson's integral:

(d/dx)[x Mν²(x)] = (8/π²) ∫₀^∞ (tanh t − 2ν tanh 2νt) tanh t · K₀(2x sinh t) cosh(2νt) dt.   (9)

To determine the sign of the integral, note that differentiation shows, for t > 0, that (d/dλ)[λ tanh(λt)] > 0. It follows that

tanh t − 2ν tanh 2νt = −∫₁^{2ν} (d/dλ)[λ tanh(λt)] dλ   (10)

is greater than zero for ν < 1/2 and less than zero for ν > 1/2. Since all other terms in the integral in (9) are positive, (d/dx)[x Mν²(x)] is also greater than zero for ν < 1/2 and less than zero for ν > 1/2. Lastly, from Equation (6), we have

θν″(x) = −(2/(πx²Mν⁴(x))) (d/dx)[x Mν²(x)],   (11)

and so θν″(x) > 0 for ν > 1/2, and θν″(x) < 0 for ν < 1/2.

Our proof of the convexity of φ is more involved. We first derive an equivalent of Nicholson's integral for Nν²(x) = Jν′²(x) + Yν′²(x). We follow a similar method to Watson [7, pg. 445] by noting

[x² d²/dx² + x d/dx + (x² − ν²)] Mν²(x) = 2x² Nν²(x) − (x² − ν²) Mν²(x).   (12)


Differentiation under the integral sign in Nicholson's integral gives the left-hand side as

[x² d²/dx² + x d/dx + (x² − ν²)] Mν²(x)
  = (8/π²) ∫₀^∞ [(2x sinh t)² K₀″(2x sinh t) + 2x sinh t K₀′(2x sinh t) + (x² − ν²) K₀(2x sinh t)] cosh(2νt) dt.   (13)

Using the differential equation for the modified Bessel function and manipulating gives the identity

Nν²(x) = (8/(π²x²)) ∫₀^∞ [x² cosh 2t − ν²] K₀(2x sinh t) cosh(2νt) dt.   (14)

This representation does not appear in standard tables, and to the best of the author's knowledge it is a new result. With this representation we can now prove the convexity of φ. However, we shall prove the more general inequality

(d/dx)[ x²/√(x²−ν²) · Nν²(x) ] < 0,  x > ν ≥ 0,   (15)

which will also be used in later sections. This inequality is equivalent to the statement

(x²−ν²) (d/dx)[x² Nν²(x)] − x³ Nν²(x) < 0,  x > ν ≥ 0.   (16)

The first term can be modified by manipulating the identity [1, §10.18.10] to give

(d/dx)[x² Nν²(x)] = −(x²−ν²) (d/dx) Mν²(x),   (17)

and the result of Watson [7, pg. 447], derived from Nicholson's integral, can be used to show

(x²−ν²) (d/dx) Mν²(x) = (8x/π²) ∫₀^∞ {2ν sinh³t sech t sinh(2νt) − sech²t cosh(2νt)} K₀(2x sinh t) dt.   (18)

The second term is expanded with the Nicholson-like integral, Eq. (14). With these substitutions and some manipulation, the left-hand side of Inequality (16) becomes

(x²−ν²) (d/dx)[x² Nν²(x)] − x³ Nν²(x)
  = −(8x/π²) ∫₀^∞ { [2x² sinh²t + (x²−ν²) tanh²t] cosh(2νt) + 2ν(x²−ν²) sinh³t sech t sinh(2νt) } K₀(2x sinh t) dt.   (19)

The inequality (15) therefore holds since the integrand is positive for x > ν.
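Inequality (15) at ν = 0 reduces to the statement that xN₀²(x) is strictly decreasing. The sketch below is a numerical spot-check, not part of the paper: to stay self-contained it builds J₁ and Y₁ from their standard integral representations with a composite Simpson rule (rather than a Bessel library), and uses N₀² = J₀′² + Y₀′² = J₁² + Y₁².

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def bessel_j(n, x):
    # Bessel's integral: J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin t) dt
    return simpson(lambda t: math.cos(n * t - x * math.sin(t)), 0.0, math.pi) / math.pi

def bessel_y(n, x):
    # Y_n(x) = (1/pi) int_0^pi sin(x*sin t - n*t) dt
    #        - (1/pi) int_0^inf (e^{nt} + (-1)^n e^{-nt}) e^{-x*sinh t} dt
    osc = simpson(lambda t: math.sin(x * math.sin(t) - n * t), 0.0, math.pi) / math.pi
    tail = simpson(lambda t: (math.exp(n * t) + (-1) ** n * math.exp(-n * t))
                   * math.exp(-x * math.sinh(t)), 0.0, 12.0) / math.pi
    return osc - tail

def N2_order0(x):
    # N_0^2(x) = J_0'(x)^2 + Y_0'(x)^2 = J_1(x)^2 + Y_1(x)^2
    return bessel_j(1, x) ** 2 + bessel_y(1, x) ** 2

# Inequality (15) with nu = 0: x * N_0^2(x) decreases monotonically (toward 2/pi).
samples = [x * N2_order0(x) for x in (0.5, 1.0, 2.0, 4.0, 8.0)]
print(samples)
```

The samples decrease toward the limiting value 2/π given by the expansion (24); this only illustrates the inequality at a few points, of course — the proof is the integral representation above.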


To show the convexity of φ for x > ν, we differentiate Equation (7):

φν″(x) = −(2(x²−ν²)²/(πx⁶Nν⁴(x))) (d/dx)[ x³Nν²(x)/(x²−ν²) ]
       = −(2(x²−ν²)²/(πx⁶Nν⁴(x))) { (x/√(x²−ν²)) (d/dx)[ x²Nν²(x)/√(x²−ν²) ] − (ν²x²/(x²−ν²)²) Nν²(x) }.   (20)

Both terms in the braces are negative, and so φν″(x) > 0 for x > ν > 0.

3 Asymptotic Expansions

Asymptotic expansions of the phase and modulus functions for large argument can be derived from the classical expansions of Hankel [8]. These take the form

θν(x) ∼ x − (ν/2 + 1/4)π + (μ−1)/(2(4x)) + (μ−1)(μ−25)/(6(4x)³) + ···,   (21)

φν(x) ∼ x − (ν/2 − 1/4)π + (μ+3)/(2(4x)) + (μ²+46μ−63)/(6(4x)³) + ···,   (22)

Mν²(x) ∼ (2/(πx)) [1 + (1/2)(μ−1)/(2x)² + ((1·3)/(2·4))(μ−1)(μ−9)/(2x)⁴ + ···],   (23)

Nν²(x) ∼ (2/(πx)) [1 − (1/2)(μ−3)/(2x)² − ((1·3)/(2·4))(μ−1)(μ−45)/(2x)⁴ − ···],   (24)

where μ = 4ν². Higher-order terms can be found in [1, §10.18(iii)]. These expansions can be truncated to provide an approximation to the functions; however, this is only useful when the argument is significantly larger than 4ν², which becomes increasingly restrictive as the order increases. To find an approximation which is useful as the order grows large, we should consider asymptotic expansions in this variable instead. Such expansions are available for the Bessel functions in the form of Debye's expansion and Olver's uniform expansion, as found in the DLMF [1, §10.19 and §10.20]. Directly transforming these into expansions for the phase and modulus functions is not a simple task. Instead, we derive an equivalent expansion to that of Debye via differential equation theory. This involves transforming Bessel's differential equation with a Liouville transformation so that the solutions can be closely approximated by elementary functions when the order is large. Olver [9, Chap. 10] gives an accessible and thorough presentation of the technique. To begin with, we note that it is possible to manipulate Olver's transformed Bessel equation [9, pg. 374] to show that the function

w(ξ) = (z²−1)^{1/4} Cν(νz),   (25)


where Cν is a linear combination of Bessel functions, satisfies

d²w/dξ² + [ν² + z²(z²+4)/(4(z²−1)³)] w(ξ) = 0,   (26)

with

ξ = ∫ (√(z²−1)/z) dz = √(z²−1) − arcsec z.   (27)

To derive an asymptotic expansion for the phase function from this equation, we follow another note of Olver [9, pg. 366] by seeking a formal solution in the form

wν(ξ) = exp( iνξ + Σ_{m=1}^∞ E_m(ξ)/(iν)^m ),   (28)

with a second solution given by ν → −ν. After substitution into Equation (26) and comparing coefficients of equal powers of ν, we find the terms in the formal solution are

E₁(ρ) = ρ(5ρ²+3)/24,   (29)

E_{n+1}(ρ) = ½ ρ²(ρ²+1) E_n′(ρ) + ½ Σ_{m=1}^{n−1} ∫ E_m′(ρ) E_{n−m}′(ρ) ρ²(ρ²+1) dρ,   (30)

where we have changed variable to ρ(ξ) = 1/√(z²−1). To find the phase and modulus functions in terms of this expansion, we write the Hankel function, at least in a formal sense, as a linear combination of these solutions:

√z Hν^{(1)}(νz) = c₁ wν(ξ) + c₂ w₋ν(ξ).   (31)

To find the coefficients, we match this series with the classical asymptotic expansion for large argument:

√z Hν^{(1)}(νz) ∼ √(2/π) e^{i(νz − νπ/2 − π/4)},   (32)

as z → ∞. We then identify the constants in Equation (31) as c₁ = √(2/π) e^{−iπ/4} and c₂ = 0. Taking the imaginary part of the argument of the exponential in Eq. (28) then gives us the desired expansion of the phase function:

θν(x) ∼ Im[ −iπ/4 + iνξ + Σ_{m=1}^∞ E_m(ξ)/(iν)^m ]
      ∼ −π/4 + νξ + Σ_{n=1}^∞ (−1)ⁿ E_{2n−1}(ξ)/ν^{2n−1}.   (33)

Or, in the original variables, the first terms of this expansion are

θν(x) ∼ −π/4 + √(x²−ν²) − ν arcsec(x/ν) − (3x² + 2ν²)/(24(x²−ν²)^{3/2})
       + (375x⁶ + 3654x⁴ν² + 1512x²ν⁴ − 16ν⁶)/(5760(x²−ν²)^{9/2}) + ···   (34)


for x > ν and as ν → ∞. We can find an expansion for the phase function of the Bessel function derivatives, φν(x), using the same technique. This time, we start with the differential equation for the derivatives [1, §10.13.7]

x²(x²−ν²)v″ + x(x²−3ν²)v′ + [(x²−ν²)² − (x²+ν²)]v = 0,   (35)

where v(x) is a linear combination of derivatives of Bessel functions. As with the Bessel function case, it is convenient to work in the variable z = x/ν and to eliminate the coefficient of the second derivative. This gives the equation

d²v/dz² + [(z²−3)/(z(z²−1))] dv/dz + [(ν²(z²−1)² − (z²+1))/(z²(z²−1))] v = 0.   (36)

The first derivative term can be eliminated with the change of dependent variable v(z) = V(z) exp(−½ ∫ f(z) dz), where f(z) is the coefficient of the first derivative. Integration yields, up to a constant of proportionality,

exp(−½ ∫ f(z) dz) = √((z²−1)/z³),   (37)

and V(z) then satisfies

d²V/dz² + [ ν²(z²−1)/z² − (3z⁴ + 10z² − 1)/(4z²(z²−1)²) ] V = 0.   (38)

To put this equation in a form susceptible to a substitution of the type given in Equation (28), we again follow Olver by applying a Liouville transformation. This changes the dependent variable to

p(z) = (dξ/dz)^{1/2} V(z),   (39)

which satisfies

d²p/dξ² + [ ν² ((z²−1)/z²)(dz/dξ)² + ψ(z) ] p(ξ) = 0,   (40)

where ψ = (dz/dξ)² g(z) − ½{z, ξ}, g(z) is the second term in brackets in Equation (38), and

{z, ξ} = (d³z/dξ³)/(dz/dξ) − (3/2)[(d²z/dξ²)/(dz/dξ)]²   (41)

is the Schwarzian derivative. As with the previous case, the appropriate choice of ξ to transform this equation into harmonic type is ξ = √(z²−1) − arcsec z and, with some manipulation, we find the resulting equation is

d²p/dξ² + [ ν² − (3z⁴ + 16z² − 2)/(4(z²−1)³) ] p(ξ) = 0,   (42)


solutions of which are related to a linear combination of first derivatives of Bessel functions by

Cν′(νz) = ((z²−1)^{1/4}/z) p(ξ).

Again, to find an expansion for the phase function, we look for a formal solution of the form

pν(ξ) = exp( iνξ + Σ_{m=1}^∞ F_m(ξ)/(iν)^m ).   (43)

This gives the same recurrence relation as in Equation (30), but instead with the initial value

F₁(ρ) = −ρ(7ρ² + 9)/24,   (44)

the F_n again being polynomials of degree 3n in ρ = (z²−1)^{−1/2}. The actual phase function can again be found by matching with the classical asymptotic expansions as z → ∞, giving

φν(x) ∼ π/4 + νξ + Σ_{n=1}^∞ (−1)ⁿ F_{2n−1}(ξ)/ν^{2n−1},   (45)

or, in terms of the original variables,

φν(x) ∼ π/4 + √(x²−ν²) − ν arcsec(x/ν) + (9x² − 2ν²)/(24(x²−ν²)^{3/2})
       − (945x⁶ + 4986x⁴ν² + 1368x²ν⁴ + 16ν⁶)/(5760(x²−ν²)^{9/2}) + ···   (46)

for x > ν and ν → ∞. Truncating these two expansions provides a far better approximation to the phase functions than the classical expansions in (21)–(22), even when the order is small. In fact, the leading term is sufficiently close for certain numerical work, as we shall show later. While these Debye-type expansions can provide a good approximation for large order or argument, near the turning point z = x/ν = 1 the higher-order terms in these expansions for the phase functions diverge too quickly to provide a method for direct high-accuracy evaluation. In Section 5, we will present a method to calculate the phase functions in this region with the aid of numerical routines for the Bessel functions, but it also seems of interest to investigate the other well-known expansion for the Bessel functions, which provides a good approximation in this region. The uniform asymptotic expansion of Olver [10] — also discussed in detail in [9, Chap. 10] and [1, §10.20] — expresses the Bessel functions in terms of a series involving Airy functions. This series, when truncated and evaluated, can provide highly accurate approximations, even near the turning point. While we have not had success in deriving the general term of the series for the phase functions, it is straightforward to show that the first-order terms are

θν(νz) = −π/2 + θ_A(ν^{2/3}ζ) + O(ν^{−4/3}),   (47)

φν(νz) = π/2 + φ_A(ν^{2/3}ζ) + O(ν^{−2/3}),   (48)


where

θ_A(x) = arctan(Ai(x)/Bi(x)),   (49)

φ_A(x) = arctan(Ai′(x)/Bi′(x)),   (50)

are the phase functions of the Airy functions, Ai and Bi, and of their derivatives, as described in [1, §9]. The arctangent takes the branch which ensures continuity with the constraint θ_A(0) = −φ_A(0) = π/6. The function ζ(z) is given by

(2/3)(−ζ)^{3/2} = √(z²−1) − arcsec z,  z ≥ 1,   (51)

(2/3)ζ^{3/2} = ln((1 + √(1−z²))/z) − √(1−z²),  0 < z ≤ 1.   (52)

4 Bounds

Since K₀ is a decreasing function, this function is negative and finite for x > 0. The difference between the phase functions therefore remains in the same quadrant; that is,

(n − 1/2)π < φν(x) − θν(x) < nπ

for some n ∈ ℤ. To find this n, we note that, as x → ∞,

φν(x) − θν(x) → [x − (ν/2 − 1/4)π] − [x − (ν/2 + 1/4)π] + O(x⁻¹) = π/2 + O(x⁻¹),


so we can conclude n = 1, i.e.

π/2 < φν(x) − θν(x) < π.   (54)

To find a bound on θν, we follow Watson [7, pg. 446] by noting that since

(d/dx)[√(x²−ν²) Mν²(x)] > 0,

and, from Eq. (23),

lim_{x→∞} √(x²−ν²) Mν²(x) = 2/π,

then

√(x²−ν²) Mν²(x) < 2/π.   (55)

We can now use this to find a lower bound on θν′(x) by rearranging it into

√(x²−ν²)/x < 2/(πxMν²(x)) = θν′(x),   (56)

and integration gives

θν(x) > ξν(x) + θν(ν),  x > ν,   (57)

where ξν(x) = √(x²−ν²) − ν arcsec(x/ν) = νξ(x/ν). It is useful to note that, since θν′(x) > 0, we have θν(ν) > θν(0) = −π/2. We can use a similar line of reasoning to find an upper bound for φν(x) using the new identity in Eq. (15). Since

(d/dx)[ x²/√(x²−ν²) · Nν²(x) ] < 0,

and, using Eq. (24),

lim_{x→∞} x²/√(x²−ν²) · Nν²(x) = 2/π,

we have, when x > ν ≥ 0,

x²/√(x²−ν²) · Nν²(x) > 2/π.   (58)

Rewriting in terms of the phase function gives

φν′(x) = 2(x²−ν²)/(πx³Nν²(x)) < √(x²−ν²)/x,

and integration gives

φν(x) < ξν(x) + φν(ν),  x > ν > 0.   (59)

Again, we can find a less tight but more useful bound by noting that φν′(x) < 0 for x < ν, which means φν(ν) < φν(0) = π/2.


Combining these together shows that the phase functions satisfy, for x > ν,

ξν(x) + θν(ν) < θν(x) < φν(x) < ξν(x) + φν(ν),   (60)

where, from the previous section, ξν(x) = √(x²−ν²) − ν arcsec(x/ν). This inequality bounds the error of the approximation attained by truncating the Debye expansions to first order. That is,

|θν(x) − (ξν(x) − π/4)| < φν(ν) + π/4 < 3π/4,   (61)

and

|φν(x) − (ξν(x) + π/4)| < π/4 − θν(ν) < 3π/4.   (62)

In the next section we will present a novel use for this error bound.
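As a quick numerical illustration (not from the paper): at the mth positive root of Jν the exact phase satisfies θν = (m − 1/2)π, so the first-order truncation ξν(x) − π/4 can be checked directly against known roots. The root values below are standard tabulated constants; everything else is elementary arithmetic.

```python
import math

def xi(x, nu):
    # xi_nu(x) = sqrt(x^2 - nu^2) - nu*arcsec(x/nu), with arcsec(s) = arccos(1/s)
    if nu == 0.0:
        return x
    return math.sqrt(x * x - nu * nu) - nu * math.acos(nu / x)

def theta_debye1(x, nu):
    # First-order Debye truncation of theta_nu, the approximant bounded in (61)
    return xi(x, nu) - math.pi / 4

# (nu, m-th positive root of J_nu, m); theta_nu at the root equals (m - 1/2)*pi.
cases = [(0.0, 2.404825557695773, 1),
         (0.0, 30.634606468431972, 10),
         (1.0, 3.831705970207512, 1),
         (1.0, 7.015586669815619, 2)]
errors = [abs(theta_debye1(x, nu) - (m - 0.5) * math.pi) for nu, x, m in cases]
print(errors)
```

The observed errors are far inside the 3π/4 guarantee of (61) — and in particular well below π, which is all the recovery scheme of the next section requires.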

5 Calculating the Phase Functions

We now use the results of the previous sections to formulate a method of calculating the Bessel phase functions. We assume routines for calculating the Bessel functions, for example those given by Amos [11], which are fast and widely available. With such routines, along with some for the inverse trigonometric functions, the phase functions can be calculated numerically modulo 2π via

θν(x) ≡ arg(Jν(x) + iYν(x))  (mod 2π),   (63)

and likewise for φ. Here, arg(z) denotes the principal argument or phase of the complex number z. Equation (63) can be implemented in real arithmetic using the "atan2" function common in libraries of numerical routines. When the argument is 0 < x < ν, the phase functions do not leave the range (−π, π), since neither the Bessel functions nor their derivatives have roots less than their order ν. In this region, we can therefore guarantee equality in Equation (63), rather than just congruence. For x > ν, however, this calculation will be off by an unknown multiple of 2π. This constant can be resolved by counting the number of roots of the Bessel function of the second kind less than the argument, but this is potentially costly, especially for large arguments. Instead, we use the asymptotic expansions derived earlier as approximations and then recover the full phase using standard numerical routines. For a moment, suppose we have an approximation θ̂ν to the function θν whose error ε(x) = θν(x) − θ̂ν(x) satisfies |ε(x)| < π for x > ν. Then the error identically satisfies

ε(x) = mods(θν(x) − θ̂ν(x), 2π),   (64)

where mods(x, y) = x − y round(x/y) is the symmetric modulo function, taking values in the range (−y/2, y/2]. Now, since the modulo function satisfies the identity

mods(w + x, y) = mods(mods(w, y) + mods(x, y), y),   (65)


and we can calculate mods(θν(x), 2π) using Eq. (63), the error ε can be calculated exactly. We can then calculate the phase function with

θν(x) = θ̂ν(x) + mods(arg(Jν(x) + iYν(x)) − θ̂ν(x), 2π).   (66)

We showed in the previous section, with Inequalities (61)–(62), that the first-order truncations of the Debye-type asymptotic expansions satisfy the necessary error condition. To calculate the total phase for any positive real argument, we simply evaluate (66) with

θ̂ν(x) = √(x²−ν²) − ν arcsec(x/ν) − π/4 for x > ν,  and  θ̂ν(x) = 0 for 0 < x ≤ ν,   (67)

and likewise for φ (with +π/4 in place of −π/4). An implementation of this method is available on the MATLAB Central File Exchange [12]. In the next sections, we will show some applications of this method.
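The recipe (63)–(67) can be sketched end-to-end in a few lines. The sketch below is not the paper's MATLAB implementation [12]: to stay self-contained it substitutes quadrature-based J₀ and Y₀ for a Bessel library. The hypothetical helper theta0_total recovers the continuous phase θ₀(x); at the roots j_{0,m} of J₀ taken from Table 1 it should return (m − 1/2)π.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def j0(x):
    # J_0(x) = (1/pi) int_0^pi cos(x sin t) dt
    return simpson(lambda t: math.cos(x * math.sin(t)), 0.0, math.pi, 4000) / math.pi

def y0(x):
    # Y_0(x) = (1/pi) int_0^pi sin(x sin t) dt - (2/pi) int_0^inf e^{-x sinh t} dt
    osc = simpson(lambda t: math.sin(x * math.sin(t)), 0.0, math.pi, 4000) / math.pi
    tail = simpson(lambda t: 2.0 * math.exp(-x * math.sinh(t)), 0.0, 6.0, 6000) / math.pi
    return osc - tail

def mods(x, y):
    # Symmetric modulo of Eq. (64): values in (-y/2, y/2]
    return x - y * round(x / y)

def theta0_total(x):
    # Total phase theta_0(x), x > 0, via Eqs. (63), (66), (67)
    theta_hat = x - math.pi / 4            # first-order Debye truncation, nu = 0
    raw = math.atan2(y0(x), j0(x))         # theta_0(x) mod 2*pi, Eq. (63)
    return theta_hat + mods(raw - theta_hat, 2 * math.pi)

# At the roots j_{0,m} of J_0 (Table 1) the total phase is (m - 1/2)*pi.
phases = [theta0_total(x) / math.pi
          for x in (2.404825557695773, 5.520078110286311, 30.634606468431972)]
print(phases)
```

Note how the unknown multiple of 2π is absorbed by the Debye seed: atan2 alone would return 9.5π − 10π = −π/2 at the tenth root, while the recovered total phase is 9.5π.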

6 Applications to root finding

Now that we have a method for calculating these phase functions, we can use them to create practical numerical methods for finding the roots of linear and certain nonlinear combinations of Bessel functions or their derivatives. For instance, if we define

Cν(x) = Jν(x) cos(πt) + Yν(x) sin(πt) = Mν(x) cos(θν(x) − πt)   (68)

for some t ∈ (−1/2, 1/2], then the (real) roots of Cν(x) can be found by solving

θν(x) = (k + t − 1/2)π   (69)

for some integer k. Since the phase function θν is monotone increasing and θν(0) = −π/2, the first root occurs with

k = 0 for t > 0,  and  k = 1 for t ≤ 0.   (70)

We see that the mth real root of Cν(x), in ascending order, is the unique solution to

θν(x) = (m + t − ⌈t⌉ − 1/2)π,   (71)

where we ignore the "trivial" root at x = 0 for the Bessel functions of the first kind (t = 0). Derivative-based root-finding methods, such as Newton's, are particularly suited to this problem, as the derivatives of the phase functions can be calculated cheaply from Equations (6)–(7). Furthermore, since θν is monotone and convex, as shown in an earlier section, solving Equation (69) with Newton's method is guaranteed to



Fig. 1 Bessel phase functions θν ( x ) (solid) and φν ( x ) (dashed) for ν = 2.

converge for any positive initial guess — at least in exact arithmetic. (See [1, §3.8(ii)], for example.) For an initial starting point for Newton's method, McMahon's expansion can be used. This expansion can be derived by inverting the classical asymptotic expansion (21), which gives, to first order in m⁻¹,

x ≈ (m + t − ⌈t⌉ + ν/2 − 1/4)π.   (72)

While Newton's method is generally a good choice of root-finding method for this problem, as can be seen in Figure 1, when t → −1/2 the first root is somewhat ill-conditioned, in the sense that small variations in t create large variations in the root. In this case it is more appropriate to use some kind of bracketing method, such as bisection or Ridders' method [13]. We can also construct a method for finding the roots of a linear combination of derivatives of Bessel functions. Again, with a linear combination in the form

Cν′(x) = Jν′(x) cos(πt) + Yν′(x) sin(πt) = Nν(x) cos(φν(x) − πt),   (73)

the roots of Cν′(x) correspond to the solutions of

φν(x) = (j + t − 1/2)π   (74)

for some integer j. Unlike the previous case, however, φν is not a monotone function on its whole domain, and so there is not always a unique solution to Eq. (74) for each j. Instead, the function is monotone decreasing on the interval (0, ν] and monotone increasing on [ν, ∞). With this information, and the aid of Figure 1, we can resolve the general case as follows:

– For t > 0, the roots of Cν′(x) correspond to the unique solutions of (74) with j ≥ 1, i.e. the mth root corresponds to j = m.


– For φν(ν)/π − 1/2 < t ≤ 0, there are two roots of Cν′(x) corresponding to j = 1 in Equation (74): one less than ν, one larger than ν. All other roots correspond uniquely to a j in (74); that is, the mth root in ascending order corresponds to j = m − 1 for m > 1.
– When t = φν(ν)/π − 1/2, the two roots corresponding to j = 1 coalesce into a double root of Cν′(x) at x = ν. Again, all other roots correspond to a unique j, and the mth in order is found by solving (74) with j = m.
– For t < φν(ν)/π − 1/2, the double root vanishes, and the first root corresponds to j = 2 and is larger than ν. The mth root then corresponds to j = m + 1.

Again, Newton's method provides a good method for solving Equation (74) for a root larger than ν, and convergence can be guaranteed by the monotonicity and convexity properties given in Section 2. For the root less than ν, when it exists, it is best to use a bracketing method starting with the interval (0, ν). We can also use McMahon's expansion for an initial approximation to start Newton's method, which takes the form

x ≈ (j + t + ν/2 − 3/4)π,   (75)

with j determined by the above. Alternatively, for the roots near ν, we can expand the function φν(x) as a Taylor series about the minimum at x = ν and solve (74), giving

x ≈ ν( 1 ± (πNν(ν)/√2) √(j + t − 1/2 − φν(ν)/π) ).   (76)
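The case analysis above can be transcribed directly into code. The helper below is hypothetical (it is not part of the paper's implementation [14]): it returns the index j in (74) for the mth root, given t and the value φν(ν) of the phase at its minimum.

```python
import math

def j_index(m, t, phi_at_nu):
    # phi_at_nu = phi_nu(nu), the minimum of the phase function (always < pi/2).
    t_crit = phi_at_nu / math.pi - 0.5
    if t > 0:
        return m                      # unique solutions: j = m
    if t > t_crit:                    # two roots share j = 1
        return 1 if m <= 2 else m - 1
    if t == t_crit:                   # double root at x = nu
        return m
    return m + 1                      # t < t_crit: first root corresponds to j = 2

# Illustrative value phi_nu(nu) = 1.0 rad, so t_crit = 1/pi - 1/2 (about -0.18).
print([j_index(1, 0.25, 1.0), j_index(3, -0.1, 1.0), j_index(2, -0.4, 1.0)])
```

The value φν(ν) itself would come from the phase routine of Section 5 evaluated at x = ν.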

6.1 Implementation

Using the above results, we have constructed a routine to calculate the roots of linear combinations of Bessel functions. We use McMahon's asymptotic expansion for an initial approximation and refine it using Newton's method on the phase functions. The implementation can be found on the MATLAB Central File Exchange [14]. For the roots of the cylinder functions (68) and their derivatives, Newton's method takes the form of a relative update

x_{n+1} = x_n (1 − g(x_n)),   (77)

where, for the roots of the cylinder functions,

g(x) = (θν(x) − q(ν, m, t)) (π/2) Mν(x)²,   (78)

and, for the roots of the cylinder function derivatives,

g(x) = (φν(x) − p(ν, m, t)) πx²Nν(x)²/(2(x²−ν²)).   (79)


Here, q(ν, m, t) and p(ν, m, t) are the values attained by the phase functions at the mth root, that is

q(ν, m, t) = (m + t − ⌈t⌉ − 1/2)π,   (80)

p(ν, m, t) = (j + t − 1/2)π,   (81)

with the integer j determined by the conditions above. To terminate the iteration, we stop when the relative update g(x_n) is less than twice the machine epsilon — the upper bound on the relative error due to rounding in the working floating-point implementation. If the method for calculating the Bessel functions is sufficiently accurate and arithmetic operations are performed in IEEE 754 [15] arithmetic, we can safely assume this will be attained. Since all roots, with the potential exception of one, are larger than ν, the fact that (π/2)Mν(x)² is less than one for x > ν > 1.417… helps to dampen numerical errors accumulated from the operations involved. In Table 1 we show the approximations to the first 10 roots of the cylinder functions of order ν = 0 with t = −1/4, 0, 1/4, 1/2 as calculated by this method. In this table, we also show the error of these values when compared to arbitrary-precision calculations performed in Mathematica [16] using the FindRoot function. Since these Newton iterations were seeded with McMahon's expansion, whose error decreases with increasing root index, the convergence rate is slowest for the small roots; however, it still takes a maximum of only 5 iterations to attain near machine-precision approximations. As the order ν grows, so does the error in McMahon's expansion, and therefore so does the number of iterations required to attain high-precision approximations. Due to the convergence rate of Newton's method, this growth in the number of iterations is only logarithmic: for instance, we found that for ν = 1000 the worst-case roots required 9 iterations to attain the tolerance, with larger roots taking fewer iterations.
We could achieve fewer iterations by using a better initial approximation to seed Newton's method in the high-order case; see [1, §10.21(vii)], for example. However, in the sample implementation, we have opted for simplicity over speed. The fact that the method still converges for such an inaccurate initial approximation also demonstrates the robustness of the general scheme.
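For concreteness, here is a sketch of the Newton iteration (77)–(78) for the t = 0 case (roots of J₀), again substituting quadrature-based Bessel evaluations for the library routines the paper assumes; the recovered roots can be compared against Table 1.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def j0(x):
    return simpson(lambda t: math.cos(x * math.sin(t)), 0.0, math.pi, 4000) / math.pi

def y0(x):
    osc = simpson(lambda t: math.sin(x * math.sin(t)), 0.0, math.pi, 4000) / math.pi
    tail = simpson(lambda t: 2.0 * math.exp(-x * math.sinh(t)), 0.0, 6.0, 6000) / math.pi
    return osc - tail

def mods(x, y):
    return x - y * round(x / y)

def root_j0(m):
    # m-th positive root of J_0 via the relative Newton update (77)-(78), t = 0.
    q = (m - 0.5) * math.pi                 # target phase q(0, m, 0), Eq. (80)
    x = (m - 0.25) * math.pi                # McMahon seed, nu = 0
    for _ in range(30):
        jj, yy = j0(x), y0(x)
        theta_hat = x - math.pi / 4         # first-order phase approximation
        theta = theta_hat + mods(math.atan2(yy, jj) - theta_hat, 2 * math.pi)
        g = (theta - q) * (math.pi / 2) * (jj * jj + yy * yy)   # Eq. (78)
        x *= 1.0 - g                        # Eq. (77)
        if abs(g) < 1e-9:
            break
    return x

roots = [root_j0(m) for m in (1, 2, 10)]
print(roots)
```

With a library-quality Bessel routine the termination tolerance could be tightened to twice the machine epsilon, as in the paper; the looser tolerance here reflects the accuracy of the quadrature stand-ins.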

7 Cross-Product Phases

The phase functions can also be used for finding the roots of the Bessel function cross-products. These arise in the characteristic equation of the Sturm–Liouville problem for Bessel's equation with boundary conditions of Dirichlet or Neumann type applied at either endpoint of the interval [1, λ]. The roots of these functions are therefore important, for example, in the study of electromagnetic waves or fluid flow inside coaxial cylinders, and of heat transfer through a spherical shell. The four choices of the two boundary conditions applied to the Sturm–Liouville problem give the functions

fν^{i,j}(x; λ) = Yν^{(i)}(λx) Jν^{(j)}(x) − Jν^{(i)}(λx) Yν^{(j)}(x),   (82)

t = −1/4:
4: 1.638559910293841(1)
3: 4.738225945999220(3)
3: 7.869737906241316(0.1)
3: 11.006883217523516(−20)
3: 14.145980730796316(−2)
3: 17.285978417063514(−7)
3: 20.426464179687105(3)
3: 23.567243928540567(−20)
3: 26.708214361593583(−20)
3: 29.849315470627065(−50)

t = 0:
4: 2.404825557695773(−2)
3: 5.520078110286311(−4)
3: 8.653727912911013(−8)
3: 11.791534439014281(0.6)
3: 14.930917708487785(0.9)
3: 18.071063967910920(3)
3: 21.211636629879258(1)
3: 24.352471530749305(−2)
3: 27.493479132040257(−2)
3: 30.634606468431972(3)

t = 1/4:
5: 0.230330781153666(−0.1)
4: 3.179288988922246(−5)
3: 6.302775737106891(−7)
3: 9.437947508934144(−2)
3: 12.576277867544770(−2)
3: 15.715900435924057(1)
3: 18.856175424926327(0.5)
3: 21.996825136574962(0.7)
3: 25.137709759709669(2)
3: 28.278751294918667(0.5)

t = 1/2:
4: 0.893576966279167(5)
4: 3.957678419314858(−1)
3: 7.086051060301774(−10)
3: 10.222345043496418(−1)
3: 13.361097473872766(−3)
3: 16.500922441528090(0.8)
3: 19.641309700887941(−1)
3: 22.782028047291558(1)
3: 25.922957653180923(−3)
3: 29.064030252728397(1)

Table 1 The first 10 roots of the cylinder functions with order ν = 0. The leading digit in each entry indicates the number of iterations required to attain the root, and the trailing digits in parentheses indicate the error in the first truncated digit. For example, an entry "3: 4.31 (4)" would indicate the algorithm took 3 iterations to attain the root 4.31, which has an absolute error of 4 × 10⁻³.

where J_ν and Y_ν denote the Bessel functions of the first and second kind, and the bracketed superscripts denote derivatives with respect to the argument of the function. We will only consider the common case where i and j are either 0 or 1 — the cross-products of the Bessel functions and their derivatives — with the parameters λ and ν fixed. The roots for varying ν are also of interest, but are not sought here. For convenience in this subsection we use a different notation for the phase and modulus functions, in the form

    J_ν^{(j)}(x) + iY_ν^{(j)}(x) = M_ν^{j}(x) exp(iθ_ν^{j}(x)).    (83)

With these functions, the cross-product functions of Eq. (82) take the form

    f_ν^{i,j}(x; λ) = M_ν^{i}(λx) M_ν^{j}(x) sin[θ_ν^{i}(λx) − θ_ν^{j}(x)]    (84)
                    = M_ν^{i,j}(x; λ) sin θ_ν^{i,j}(x; λ).    (85)
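The factorisation (84) is easy to verify numerically: since sin is 2π-periodic, the wrapped argument of the Hankel function suffices for the right-hand side. A quick sanity check using SciPy (our choice of library; the paper's own code is in MATLAB, and the sample parameter values are ours):

```python
# Check Eq. (84) for i = j = 0:
# Y(lam*x)J(x) - J(lam*x)Y(x) = |H(lam*x)||H(x)| sin(arg H(lam*x) - arg H(x)),
# where H = H^(1) = J + iY is the Hankel function.
import cmath
import math
from scipy.special import jv, yv, hankel1

nu, lam, x = 1.0, 2.0, 3.5

lhs = yv(nu, lam * x) * jv(nu, x) - jv(nu, lam * x) * yv(nu, x)  # Eq. (82)

h1, h2 = hankel1(nu, lam * x), hankel1(nu, x)
rhs = abs(h1) * abs(h2) * math.sin(cmath.phase(h1) - cmath.phase(h2))

assert abs(lhs - rhs) < 1e-12
```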

Here θ_ν^{i,j}(x; λ) is the cross-product phase, and M_ν^{i,j}(x; λ) is the cross-product modulus. We will typically drop the λ when it is assumed to be a fixed parameter. Figure 2 shows the general features of these phase functions. It is clear that the roots of these functions occur when the cross-product phase passes through integer multiples of π, but to use the phase functions to construct a practical root-finding method we need to know at which multiple of π a particular root occurs. Akin to the results for the regular phase functions, we now derive some


[Figure 2 appears here: the four cross-product phases θ_ν^{i,j}(x; λ), plotted as phase/π against x over 0 ≤ x ≤ 30.]

Fig. 2 Total phase for the cross-product functions θ_ν^{i,j}(x; λ) with order ν = 20, radius ratio λ = 2, and boundary condition, from top to bottom, (i, j) = (1, 0) (dotted), (0, 0) (solid), (1, 1) (dashed), (0, 1) (dot-dashed).

monotonicity results for these cross-product phases. Some of these properties can be found by noting the following simple result:

Lemma 1 If f(x) is a differentiable, monotone increasing function with a positive, piecewise-continuous second derivative on some interval [a, b], and λ > 1 is a fixed constant, then g(x) = f(λx) − f(x) is a monotone increasing function on the (possibly empty) interval [a, b/λ].

Proof Notice

    g′(x) = λ f′(λx) − f′(x)    (86)
          = (λ − 1) f′(λx) + f′(λx) − f′(x)    (87)
          = (λ − 1) f′(λx) + ∫_x^{λx} f″(t) dt    (88)
          ≥ 0.    (89)

From the monotonicity theorems for the phase functions, Theorems 1 and 2, we can see that θ_ν^{0,0}(x) and θ_ν^{1,1}(x) are monotone increasing for x > 0, ν > 1/2 and for x > ν, ν > 0, respectively. The bounds on the parameters are only sufficient conditions for monotonicity of these functions. We can improve upon these bounds by other methods; however, we must treat each cross-product phase function individually, as in the following theorem.

Theorem 3 θ_ν^{i,j}(x; λ) is a monotone increasing function of x for
1. i = j = 0 and x > 0,
2. i = 0, j = 1 and x > 0,
3. i = j = 1 and x > ν/λ.
The bound on x in the last case is sufficient but not necessary.


Proof For case 1, we use Equation (6) to find

    dθ_ν^{0,0}/dx = [2/(πx M_ν²(λx) M_ν²(x))] [M_ν²(x) − M_ν²(λx)].    (90)

Now Nicholson's integral, Eq. (8), shows that M_ν²(x) is a monotone decreasing function of x, and therefore the above term is positive.

For case 2, Equations (6) and (7) give

    dθ_ν^{0,1}/dx = [2/(πx³ M_ν²(λx) N_ν²(x))] [x² N_ν²(x) − (x² − ν²) M_ν²(λx)].    (91)

When x ≤ ν all terms are positive, so we only need to consider x > ν. Using Nicholson's integral, and the equivalent for the modulus of the Bessel derivatives, Equation (14), we have

    x² N_ν²(x) − (x² − ν²) M_ν²(λx)
        = (8/π²) ∫_0^∞ [(x² cosh 2t − ν²) K_0(2x sinh t) − (x² − ν²) K_0(2λx sinh t)] cosh(2νt) dt.    (92)

Since we are considering x > ν, the second term is negative, and since K_0 is monotone decreasing,

    x² N_ν²(x) − (x² − ν²) M_ν²(λx) ≥ (8/π²) ∫_0^∞ x²(cosh 2t − 1) K_0(2x sinh t) cosh(2νt) dt.    (93)

The lower bound is greater than zero since the integrand is non-negative, and so we have the desired result.

For case 3, as stated earlier, Lemma 1 shows θ_ν^{1,1}(x) is monotone for x > ν. To prove monotonicity in the region ν/λ < x < ν, we only need to note that

    dθ_ν^{1,1}/dx = [2/(πx³ N_ν²(λx) N_ν²(x))] [(x² − ν²/λ²) N_ν²(x) − (x² − ν²) N_ν²(λx)],    (94)

and see that all terms are positive on this interval.

The properties of θ_ν^{1,0} are somewhat more complex. For almost all parameters, θ_ν^{1,0}(x) has a turning point in x whose position increases without bound as λ → 1⁺. However, the function is clearly monotone for large argument, and numerical experimentation suggests it is in fact monotone after the first root of the corresponding cross-product. In lieu of a proof of this property, however, we must settle for the following weaker result:
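The turning point of θ_ν^{1,0} is easy to observe numerically. The derivative is dθ_ν^{1,0}/dx = λφ_ν′(λx) − θ_ν′(x), where, per the formulas used above (Eqs. (6)–(7)), θ_ν′(x) = 2/(πx M_ν²(x)) and φ_ν′(x) = 2(x² − ν²)/(πx³ N_ν²(x)). A sketch with SciPy; the sample values ν = 5, λ = 1.1 are our illustrative choice:

```python
# d(theta_nu^{1,0})/dx = lam * phi'(lam*x) - theta'(x), with
# theta'(x) = 2/(pi*x*M^2(x)) and phi'(x) = 2*(x^2 - nu^2)/(pi*x^3*N^2(x)).
import math
from scipy.special import jv, yv, jvp, yvp

def dtheta10(x, nu, lam):
    M2 = jv(nu, x) ** 2 + yv(nu, x) ** 2                 # M^2(x), modulus of H^(1)
    N2 = jvp(nu, lam * x) ** 2 + yvp(nu, lam * x) ** 2   # N^2(lam*x), modulus of H^(1)'
    theta_p = 2.0 / (math.pi * x * M2)
    phi_p = 2.0 * ((lam * x) ** 2 - nu ** 2) / (math.pi * (lam * x) ** 3 * N2)
    return lam * phi_p - theta_p

nu, lam = 5.0, 1.1
assert dtheta10(4.0, nu, lam) < 0    # decreasing: lam*x < nu makes phi' negative
assert dtheta10(40.0, nu, lam) > 0   # increasing: the slope approaches lam - 1
```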


Theorem 4 The mth root of the Bessel function cross-product f_ν^{i,j}(x; λ) of Equation (82) corresponds to the unique solution of

    θ_ν^{i,j}(x; λ) = π(m − ∆),    (95)

where ∆ = δ_{j,1}(1 − δ_{i,1}δ_{ν,0}) and δ_{n,m} is the Kronecker delta, taking the value 1 when the subscripts are equal and zero otherwise.

Proof For the cases where i = j, or i = 0, j = 1, this follows directly from the monotonicity properties above and the facts that θ_ν(x) > 0 and φ_ν(x) > −(π/2)(1 − δ_{ν,0}). For the f_ν^{1,0} case, however, we must use a less direct argument to prove the theorem. First we note that this phase function satisfies

    θ_ν^{1,0}(x) > π/2,    (96)

as follows from the inequality (54) and the monotonicity of θ_ν(x). Since the function θ_ν^{1,0} is unbounded and continuous, there is at least one solution of θ_ν^{1,0}(x) = mπ for every m ∈ ℕ. We now prove that this solution is unique. From the classical asymptotic expansion for the phase functions, Eqs. (21)–(22), we have, as x → ∞,

    θ_ν^{1,0}(x) ∼ π/2 + (λ − 1)x + O(x⁻¹),    (97)

and so the roots are asymptotic to x ∼ π(m − 1/2)/(λ − 1), and there is an m₀ sufficiently large such that

    mπ < θ_ν^{1,0}(mπ/(λ − 1)) < (m + 1)π    (98)

for all m > m₀. So there are at least m roots less than x = mπ/(λ − 1), each corresponding to a point where θ_ν^{1,0} passes through one of π, 2π, …, mπ. If we now show that there are only m roots less than x = mπ/(λ − 1) for all m > m₀, we have the result.

Cochran [17] gives an argument which we can adjust for this purpose. It involves extending f_ν^{1,0} to the complex plane and using Cauchy's argument principle to count the number of roots inside a sufficiently large contour. The continuation of f_ν^{1,0} to complex arguments follows directly from that of the Bessel functions, which gives f_ν^{1,0} as an analytic function of z ∈ ℂ \ {0} with a simple pole at z = 0. We can use the analytic continuation formulae of the Bessel functions [1, §10.11] to show that f_ν^{1,0} has only one branch and that f_ν^{1,0}(−z) = −f_ν^{1,0}(z). Sturm–Liouville theory shows that all the roots are real and symmetric about zero. Therefore, since f_ν^{1,0} has a simple pole at the origin, to show that there are m real positive roots less than mπ/(λ − 1) it is sufficient to show that

    (1/2πi) ∮_C [1/f_ν^{1,0}(z)] (df_ν^{1,0}/dz) dz = 2m − 1,    (99)

where C = {z : |z| = mπ/(λ − 1)}, traversed once in a positive direction. Using the classical asymptotic expansions for the Bessel functions and their derivatives [1, §10.17], it is found that, for |z| → ∞,


    f_ν^{1,0}(z; λ) = −(2/(πz√λ)) [cos((λ − 1)z) − (K/z) sin((λ − 1)z) + O(z⁻²)],    (100)

    df_ν^{1,0}/dz = (2/(πz√λ)) {(λ − 1) sin((λ − 1)z) + [1 + (λ − 1)K] (1/z) cos((λ − 1)z) + O(z⁻²)},    (101)

where, for convenience, we have defined K = [4ν² + 3 − λ(4ν² − 1)]/(8λ). While the asymptotic expansion of the Bessel functions used here holds only in |arg z| < π, the fact that f_ν^{1,0} is odd allows this restriction to be removed from Equations (100) and (101). As cos((λ − 1)z) is nonzero on C, we can safely divide by f_ν^{1,0} to find

    (1/f_ν^{1,0}) df_ν^{1,0}/dz = −(λ − 1) tan((λ − 1)z) − [1 + (λ − 1)K] (1/z)
                                  − (λ − 1)K tan²((λ − 1)z)/z + O(z⁻²).    (102)

Inside the curve C, (λ − 1) tan((λ − 1)z) has 2m simple poles of residue −1, and

    Res_{w=(n+1/2)π} [tan²w / w] = −4/(π²(2n + 1)²).    (103)

Equation (99) therefore evaluates to

    (1/2πi) ∮_C (1/f_ν^{1,0}) (df_ν^{1,0}/dz) dz ∼ 2m − 1 + (λ − 1)K (−1 + (4/π²) Σ_{n=−m}^{m−1} 1/(2n + 1)²) + O(m⁻²).    (104)

It follows from the famous result of Euler, given in [18, formula 0.234.2] for example, that

    Σ_{n=−m}^{m−1} 1/(2n + 1)² = 2 Σ_{n=0}^{m−1} 1/(2n + 1)² = π²/4 + O(m⁻¹),    (105)

where we find the bound on the remainder by integration. The bracketed term in Eq. (104) is therefore of order m⁻¹. We can make the O(m⁻¹) terms less than one by choosing a sufficiently large m. But Equation (99) must be an integer, so there exists some m₀′ large enough such that

    (1/2πi) ∮_C (1/f_ν^{1,0}) (df_ν^{1,0}/dz) dz = 2m − 1,    (106)
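The remainder estimate in Eq. (105) is simple to confirm numerically: the symmetric sum approaches π²/4 with an error that shrinks like 1/m, and the integral test bounds the tail by 1/(2m − 1). A stdlib-only check:

```python
# Eq. (105): sum_{n=-m}^{m-1} 1/(2n+1)^2 = 2*sum_{n=0}^{m-1} 1/(2n+1)^2
#          = pi^2/4 + O(1/m).
import math

for m in (10, 100, 1000):
    s = 2.0 * sum(1.0 / (2 * n + 1) ** 2 for n in range(m))
    # The tail is bounded via the integral test by 1/(2m - 1).
    assert abs(s - math.pi ** 2 / 4) < 1.0 / (2 * m - 1)
```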

for all m > m₀′, and the result follows.

We can now create a numerical method for finding the roots based on these results. The derivatives of the cross-product phases are easy to calculate, and so it is worth using a root-finding method which takes advantage of this. Newton's method again seems like a good choice but, unlike in the linear-combination case, we cannot guarantee convergence, as the functions are not always convex. This is only a problem near changes in convexity, which occur near x = ν.
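The Newton iteration on the cross-product phase can be sketched in a few lines. For the (i, j) = (0, 0) case, the wrapped phase θ_ν^{0,0} is available as the argument of H_ν^{(1)}(λx) · conj(H_ν^{(1)}(x)), and Eq. (90) supplies its derivative. The sketch below uses SciPy and is a simplification of the paper's MATLAB routine, not the routine itself; the starting guess is ours rather than the McMahon expansion:

```python
# Newton's method on the cross-product phase theta_nu^{0,0}(x; lam).
# Near the m-th root theta = m*pi, so the residual modulo pi of
# arg[H(lam*x) * conj(H(x))] drives the iteration; Eq. (90) gives d(theta)/dx.
import cmath
import math
from scipy.special import hankel1

def cross_phase_newton(x, nu, lam, tol=1e-14, itmax=20):
    for _ in range(itmax):
        h1, h2 = hankel1(nu, lam * x), hankel1(nu, x)
        # Phase residual mapped into (-pi/2, pi/2].
        resid = math.remainder(cmath.phase(h1 * h2.conjugate()), math.pi)
        # Eq. (90): d(theta)/dx = (2/(pi*x)) * (1/M^2(lam*x) - 1/M^2(x)) > 0.
        dtheta = (2.0 / (math.pi * x)) * (1.0 / abs(h1) ** 2 - 1.0 / abs(h2) ** 2)
        step = resid / dtheta
        x -= step
        if abs(step) < tol * x:
            break
    return x

root = cross_phase_newton(3.1, nu=1.0, lam=2.0)   # first root for nu = 1, lam = 2
assert abs(root - 3.196578380810636) < 1e-10      # cf. Table 2, (i, j) = (0, 0)
```

Because the monotone phase replaces the oscillating cross-product itself, the residual stays bounded and each step is a small relative correction to x.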


7.1 Implementation

As with the cylinder functions, we have constructed a routine to calculate the roots of the cross-products using the results described. Again, we use McMahon's asymptotic expansion for an initial approximation, but in this case we use two different forms. For roots larger than the order ν, we use the expansion for large cross-product roots given in [1, §10.21(x)], and for roots less than ν, we use the expansion for the roots of J_ν(λx) = 0 found in [1, §10.21(vi)]. We use the value of θ_ν^{i,j}(ν) and Eq. (95) to determine which region the mth root is in. We again update this initial guess with Newton's method on the phase functions, which also takes the form of a relative update, and we stop the iteration when the update is smaller than twice machine epsilon. The implementation can also be found on the MATLAB Central File Exchange in the same package [14].

As an illustration, in Table 2 we show the roots of the cross-product functions with order ν = 1 and radius ratio λ = 2. Again, we compare the error of these numerical values to those attained with 30-digit precision in Mathematica [16] using the FindRoot function. The method can be seen to give high-accuracy results, typically correct to the last decimal place shown. Again, due to the nature of the initial approximations used, the earlier roots require more iterations, the number of which increases logarithmically with order. We have tested this implementation over a wide range of parameter values with great success.

(i, j) = (0, 0)                           (i, j) = (0, 1)
4: 3.196578380810636 (−9.4)    5: 1.958510605499613 (1.2)
3: 6.312349510373263 (1.2)     4: 4.857021628611102 (12.0)
4: 9.444464925482276 (−32)     4: 7.941288451978208 (−8.6)
3: 12.581202810104109 (−5.7)   5: 11.058021551998911 (0.6)
3: 15.719854269429742 (−3.2)   4: 14.185762076051644 (8.6)
4: 18.859476620138391 (2.2)    3: 17.318529180112257 (15.0)
3: 21.999658021217325 (0.8)    3: 20.454008150396199 (7.8)
3: 25.140190406879562 (−6.7)   3: 23.591115874989690 (13.0)
3: 28.280957458331748 (−1.0)   3: 26.729278123198714 (52.0)
3: 31.421889098157511 (−2.6)   4: 29.868162161529249 (−3.6)

(i, j) = (1, 0)                           (i, j) = (1, 1)
4: 1.486285662229119 (−5.0)    7: 0.677336005136584 (−2.0)
3: 4.697208740971374 (−8.1)    4: 3.282471191161380 (−5.0)
3: 7.845595357833206 (9.0)     5: 6.353211168548725 (−19)
3: 10.989732279161668 (−2.5)   3: 9.471329653051919 (−17)
3: 14.132671317287626 (−1.2)   3: 12.601243520141521 (−2.0)
3: 17.275101446206580 (2.7)    3: 15.735845516773166 (−0.2)
3: 20.417266706749061 (0.01)   3: 18.872783634836040 (−3.9)
3: 23.559276115352063 (2.8)    3: 22.011054105962018 (−1.2)
3: 26.701185903198414 (5.5)    4: 25.150156308925947 (−3.8)
4: 29.843028083313875 (−4.7)   3: 28.289812567383688 (−3.0)

Table 2 The first 10 roots of the Bessel function cross-products with order ν = 1 and radius ratio λ = 2. As in Table 1, the leading digit in each entry indicates the number of iterations required to attain the root, and the digits in parentheses indicate the error in the first truncated digit.
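The tabulated values can be spot-checked by substituting them back into Eq. (82). A check of the first root in each panel using SciPy (used here in place of the paper's MATLAB environment):

```python
# Substitute the first root of each (i, j) panel of Table 2 back into Eq. (82).
from scipy.special import jvp, yvp  # jvp/yvp with n=0 return J and Y themselves

nu, lam = 1.0, 2.0
first_roots = {(0, 0): 3.196578380810636,
               (0, 1): 1.958510605499613,
               (1, 0): 1.486285662229119,
               (1, 1): 0.677336005136584}

for (i, j), x in first_roots.items():
    # f_nu^{i,j}(x; lam) = Y^(i)(lam*x) J^(j)(x) - J^(i)(lam*x) Y^(j)(x)
    f = yvp(nu, lam * x, i) * jvp(nu, x, j) - jvp(nu, lam * x, i) * yvp(nu, x, j)
    assert abs(f) < 1e-12
```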


8 Conclusion

We have presented some new properties of the Bessel phase functions and translated some known asymptotic expansions of the Bessel functions into ones for these functions. With some new inequalities, we proved that these asymptotic expansions provide a sufficiently accurate approximation to enable calculation of the phase functions when combined with standard routines for the Bessel functions. We have shown some applications of these functions to root-finding problems and demonstrated their utility to the field.

Since the method for calculating the phase functions derived here relies on existing routines for the Bessel functions, it raises the question of whether there is a more direct way to calculate these functions. Such a technique may be possible using the Airy-type expansions derived by Olver [9]. We are interested to find further applications of these functions now that an efficient method for their calculation is available.

Acknowledgements I would like to thank Prof. L.K. Forbes for providing some helpful comments on the manuscript, as well as the two anonymous referees for their useful remarks and insights. I am particularly grateful to the referee who suggested the works of Segura. This work was supported by an Australian Postgraduate Award at the University of Tasmania.

References

1. NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.13 of 2016-09-09; online companion to [2].
2. F. W. J. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark (Eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, New York, NY, 2010; print companion to [1].
3. J. McMahon, On the roots of the Bessel and certain related functions, The Annals of Mathematics 9 (1) (1894) 23–30.
4. J. Segura, A global Newton method for the zeros of cylinder functions, Numerical Algorithms 18 (3) (1998) 259–276. doi:10.1023/A:1019125616736.
5. E. Sorolla, M. Mattes, Globally convergent algorithm to find the zeros of the cross-product of Bessel functions, 2011 International Conference on Electromagnetics in Advanced Applications (2011) 291–294. doi:10.1109/ICEAA.2011.6046305.
6. J. Segura, Computing the complex zeros of special functions, Numerische Mathematik 124 (4) (2013) 723–752. doi:10.1007/s00211-013-0528-6.
7. G. Watson, A Treatise on the Theory of Bessel Functions, 2nd Edition, Cambridge Mathematical Library, Cambridge University Press, 1966.
8. H. Hankel, Cylinderfunktionen erster und zweiter Art, Mathematische Annalen 1 (3) (1869) 467–501.
9. F. W. J. Olver, Asymptotics and Special Functions, Academic Press, 1974.
10. F. W. J. Olver, The asymptotic expansion of Bessel functions of large order, Philosophical Transactions of the Royal Society of London, Series A 247 (930) (1954) 328–368.
11. D. E. Amos, Algorithm 644: A portable package for Bessel functions of a complex argument and nonnegative order, ACM Transactions on Mathematical Software 12 (3) (1986) 265–273. doi:10.1145/7921.214331.
12. D. E. Horsley, Specialphase: Phases of special functions, MATLAB Central File Exchange (2016). URL http://www.mathworks.com/matlabcentral/fileexchange/57582-specialphase
13. C. Ridders, A new algorithm for computing a single root of a real continuous function, IEEE Transactions on Circuits and Systems 26 (11) (1979) 979–980.
14. D. E. Horsley, Specialzeros: Zeros of special functions, MATLAB Central File Exchange (2016). URL http://www.mathworks.com/matlabcentral/fileexchange/57679-specialzeros
15. IEEE standard for binary floating-point arithmetic, Institute of Electrical and Electronics Engineers, New York, 1985. Standard 754–1985.
16. Wolfram Research, Mathematica, Version 7.0 (2008).
17. J. A. Cochran, Remarks on the zeros of cross-product Bessel functions, Journal of the Society for Industrial & Applied Mathematics 12 (3) (1964) 580–587.
18. I. S. Gradshteĭn, I. Ryzhik, Table of Integrals, Series, and Products, 7th Edition, Elsevier Science, 2007.