3D Imaging with Holographic Tomography

Colin J. R. Sheppard (a,b,c) and Shan Shan Kou (a,b)

(a) Optical Bioimaging Laboratory, Division of Bioengineering, National University of Singapore, Singapore 117574
(b) NUS Graduate School for Integrative Sciences and Engineering, Singapore 119597
(c) Department of Biological Sciences, National University of Singapore, Singapore 117543

Abstract. There are two main types of tomography that enable the 3D internal structure of an object to be reconstructed from scattered data. The well-known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection algorithm and the Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, in which rays are assumed to propagate straight through the object. A second type, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. This proves to be a more difficult problem, as light no longer travels in a straight line through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive-index data using this approach. However, there are two distinct ways of doing tomography: either by rotating the object or by rotating the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier-optics and information-transfer point of view, we use 3D transfer-function analysis to describe quantitatively how spatial frequencies of the object are mapped into the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF.
The 3D CTF for tomography by scanning the illumination in one direction only takes on a shape that we call a 'peanut', in contrast to the case of object rotation, where a diablo is formed. The peanut exhibits significant differences and anisotropy; in particular, there is a line singularity along one transverse direction. Under high-numerical-aperture conditions the paraxial treatment is not accurate, so we make use of 3D analytical geometry to calculate the behaviour in the nonparaxial case. We then obtain a similar peanut, but without the line singularity.

Keywords: diffraction tomography, holography, biomedical optics, three-dimensional microscopy.
PACS: 42.30.Rx, 87.57.nf, 42.30.Va

IMAGING IN HOLOGRAPHY

People often regard holography as giving a three-dimensional (3D) image, but it has in fact been known for many years that the 3D information in a holographic image is far from complete [1]. This is understandable because a holographic image contains only 2N² pixels, where N is the lateral width of the image in pixels and the factor of 2 accounts for the modulus and phase information. It is difficult to see how complete 3D information could be recorded or stored on the 2D surface of a photographic film (for a thin hologram) or a CCD detector. This information is sufficient to record the surface height of a bulk 3D object, but not the detail inside it. In consequence, a hologram is often described as providing a 2½D image. It is not desirable to refer to the precision of surface-height measurement as "resolution", since resolution refers to resolving the images of two objects, and holography cannot measure the separation of two surfaces. Precision or sensitivity are acceptable descriptions, but accuracy is not, as it means something different: a high-precision measurement may be inaccurate. Another case in which a holographic image can provide useful information on 3D structure is when the object is sparse, as in particle holography. A hologram does contain some 3D information. According to Wolf [1], it contains the spatial-frequency information on the surface of a cap of a sphere in Fourier space, the so-called Ewald sphere, introduced for x-ray diffraction studies (Fig. 1). The radius of the sphere is 1/λ, often normalized to unity. The extent of the cap of the sphere is limited by the angular extent of the holographic detector. The sphere describing the spatial-frequency content of the object passes through the origin of Fourier space. This is because, by conservation of momentum, the wave vector of the incident light plus the grating vector is equal to the wave vector of the diffracted light. As the moduli of

the two wave vectors are equal, the end of the grating vector lies on a sphere through the origin. So the cap of the Ewald sphere represents the 3D coherent transfer function (CTF) of the imaging system. The transverse resolution of a hologram is limited by the angular extent of the detector, but different parts of the detector see different views of the object, so that each individual view does not contain the full transverse resolution. Fig. 1 shows the CTF for a transmission-mode holographic system [2]. This corresponds to the real holographic image; the twin image corresponds to another sphere with positive rather than negative longitudinal spatial frequencies. The complete interferogram also has components corresponding to the reference and object beams alone. These different components can be separated by phase shifting or by using an off-axis geometry. For a reflection-mode system, the corresponding CTF is shown in Fig. 1(d). A band of low longitudinal spatial frequencies is not imaged by the system.


FIGURE 1. (a) Conservation of momentum: the wave vector of the incident light plus the grating vector is equal to the wave vector of the diffracted light. (b) Spatial frequencies lie on the surface of the Ewald sphere, giving the coherent transfer function (CTF). m, s are spatial frequencies in the x, z directions. The transverse spatial-frequency cut-off is sin α/λ, where α is the semi-angular aperture of the hologram. The longitudinal spatial-frequency bandwidth is (2/λ) sin²(α/2). (c) Conservation of momentum for the reflection case. (d) The CTF for the reflection case.
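This construction can be checked numerically (a minimal sketch, not from the paper; the wavelength and scattering angle are arbitrary assumptions): the grating vector K = k2 − k1 for a scattering angle θ indeed ends on a sphere of radius 1/λ passing through the origin, and its components reproduce the cut-offs quoted in the caption.

```python
import numpy as np

lam = 0.5                  # wavelength (arbitrary units); Ewald sphere radius is 1/lam
k = 1.0 / lam

theta = np.deg2rad(30.0)   # scattering angle; illumination assumed along +z

# Incident and diffracted wave vectors in the (m, s) = (x, z) plane, both of modulus 1/lam.
k1 = k * np.array([0.0, 1.0])
k2 = k * np.array([np.sin(theta), np.cos(theta)])

# Grating vector K = k2 - k1: transverse component m, longitudinal component s.
m, s = k2 - k1

# K ends on a sphere of radius 1/lam that passes through the origin (|K + k1| = |k2|).
assert np.isclose(m**2 + (s + k)**2, k**2)

# Components match the caption: m = sin(theta)/lam, |s| = (2/lam) sin^2(theta/2).
assert np.isclose(m, np.sin(theta) / lam)
assert np.isclose(abs(s), (2.0 / lam) * np.sin(theta / 2.0)**2)
print(m, s)
```

With the angular aperture set to the full detector extent α, the same expressions give the transverse cut-off sin α/λ and longitudinal bandwidth (2/λ) sin²(α/2) of the caption.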

ILLUMINATION AT DIFFERENT ANGLES

Holographic imaging is a coherent imaging method. The spatial-frequency content of the image can be altered by changing the angle of propagation of the incident wave, so by illuminating over a range of angles the 3D spatial-frequency coverage can be improved. This is the case in confocal interferometry, which uses a pinhole or single-mode fibre to give confocal optical sectioning. Images for a range of different illumination angles are then summed coherently. The result is that the interference term of the image, which can be extracted using phase-shifting techniques, heterodyning, or signal processing by Fourier or Hilbert transformation, has a coherent transfer function as shown in Fig. 2, given by the convolution of the Ewald spheres representing the illumination and imaging pupils [3].


FIGURE 2. On the left, a cross-section through the cut-off of the CTF for a confocal transmission system, showing the missing cone. On the right, a cross-section through the cut-off of the CTF for a confocal reflection system.

We see that the spatial-frequency space imaged by the system is filled in by the range of illumination angles. For a transmission system, there is a region of spatial frequencies near the longitudinal spatial-frequency axis that is not imaged; this corresponds to the missing cone of spatial frequencies. This CTF is identical in form to the optical transfer function (OTF) for an incoherent imaging system [4]. For a reflection system, there is a band of low longitudinal spatial frequencies that is not imaged. In both cases, the cut-off for transverse spatial frequencies is twice that for conventional holography. The value of the CTFs within the pass band for a high-aperture system has been presented elsewhere [5]. Note that if only low spatial frequencies are present in the object, only the region of the CTF near the origin is of significance. In this case, the limiting behaviour of diffraction tomography tends to that of computerized tomography (CT) [6]. By the projection-slice theorem, the image of a projection is given by the Fourier transform of the spatial frequencies on a plane through the origin with its normal in the direction of the projection. Thus the missing cone corresponds to missing angles in the projections. Returning to the conventional holography case, as in Fig. 1, we see that a projection at a particular angle contains information from only one point in spatial-frequency space.
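The projection-slice theorem invoked here can be verified numerically in a few lines (a discrete 2D sketch; the random array and choice of axes are assumptions for illustration): projecting an object along one axis and Fourier transforming the projection gives exactly the slice of the object's 2D spectrum through the origin, perpendicular to the projection direction.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))   # arbitrary 2D "object"

# Project along axis 0 (integrate over rows), then take the 1D Fourier transform.
projection = f.sum(axis=0)
ft_projection = np.fft.fft(projection)

# Central slice of the 2D spectrum, i.e. the plane (here: line) through the origin
# with its normal along the projection direction.
central_slice = np.fft.fft2(f)[0, :]

assert np.allclose(ft_projection, central_slice)
print("projection-slice theorem holds")
```

Each projection angle therefore fills one slice of Fourier space, which is why missing illumination angles translate directly into the missing cone described above.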

OCT, AND VARIATION OF WAVELENGTH

Optical coherence tomography (OCT) is a case of confocal interferometry, although OCT also makes use of a range of illumination wavelengths. An Ewald sphere of different radius, but still passing through the origin, applies for each wavelength, so this process also fills in a band of 3D spatial frequencies. Usually this is achieved using a source of low temporal coherence. The range of longitudinal spatial frequencies imaged depends on the numerical aperture of the optical system. In conventional OCT the numerical aperture is small, so the coherence effect dominates over the confocal effect; but if a high numerical aperture is used, both mechanisms are present. An image recorded at a single focus position does not contain the entire 3D spatial-frequency information. This is the important distinction between holography and OCT: a holographic image can be acquired in a single shot and digitally refocused, and taking an image at a different focus position gives no new information, whereas OCT requires taking images at different focus positions, but a full 3D data set can then be obtained. Interferometry can also be performed using a spatially incoherent source, with a full-field image recorded on a CCD detector. In this case the behaviour is similar to that in OCT, except that optical sectioning occurs as a result of a coherence effect rather than a confocal effect [7]. A pixel of the detector integrates over the interference pattern of the object and reference beams, with the result that the signal cancels except for the component of the object beam whose wavefront coincides with that of the reference beam. As in OCT, a source of low temporal coherence is often used to give a combination of sectioning effects, in this case from both low spatial and low temporal coherence. A low-coherence interference system is sometimes called full-field OCT (FFOCT), to stress the similarities in their imaging properties [8, 9].
The interference term of the image is similar in the two cases if the condenser lens for FFOCT has an aperture equal to that of the imaging lens. But if the condenser lens is stopped down, imaging becomes the same as for a coherent imaging system, identical to the holographic case [10]. The background term from the object beam also differs between the two cases: for OCT it is a confocal image, but in FFOCT it is a conventional partially coherent image [11].
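The wavelength-filling argument can be illustrated with representative numbers (a sketch; the source band values are assumptions, not taken from the paper): for direct backscattering at low numerical aperture, each wavelength λ contributes the longitudinal spatial frequency 2/λ, so a source band [λmin, λmax] fills the band [2/λmax, 2/λmin], whose width sets the axial sectioning scale.

```python
import numpy as np

# Assumed source band for a low-NA reflection (OCT) geometry, in micrometres.
lam_min, lam_max = 0.75, 0.85

# Each wavelength contributes longitudinal spatial frequency |s| = 2/lam
# for direct backscattering, so the source band fills [2/lam_max, 2/lam_min].
s_lo, s_hi = 2.0 / lam_max, 2.0 / lam_min
bandwidth = s_hi - s_lo          # longitudinal spatial-frequency band filled

# The axial sectioning scale varies inversely with the filled bandwidth.
axial_scale = 1.0 / bandwidth
print(f"band [{s_lo:.3f}, {s_hi:.3f}] 1/um, width {bandwidth:.3f} 1/um")
```

A broader source band fills a wider band of longitudinal frequencies and hence gives finer axial sectioning, independently of the numerical aperture, consistent with the coherence effect dominating at low NA.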

HOLOGRAPHIC TOMOGRAPHY

In confocal or low-coherence interferometry, the object is illuminated with a range of illumination angles, and the images from these components are summed coherently. In holographic tomography, the images for different illumination directions are recorded separately, and these can then be combined digitally [12-14]. If the strength of the illumination beams for different angles is the same as for confocal interferometry, the CTF will be the same. This requires rotating the propagation direction of the illumination beam about two axes. Sometimes it is preferable to rotate the illumination beam about one axis only, as this is faster to perform. This is equivalent to a confocal system whose illumination lens is a cylindrical lens, which has a pupil represented by an Ewald cylinder rather than an Ewald sphere. The overall CTF then has a cut-off that we have described as a 'peanut' [15]. As might be expected, the spatial-frequency bandwidth in the transverse direction corresponding to the rotation is twice that in the other transverse direction. We have calculated this for systems of both low and high numerical aperture (in the scalar approximation). In both cases the CTF is obtained as an analytic expression: for the low-numerical-aperture case it was calculated as the one-dimensional Fourier transform of the defocused CTF, while for the high-numerical-aperture case we found it more convenient to calculate directly in 3D spatial-frequency space [16]. The results for low and high

numerical aperture are similar. The range of missing spatial frequencies is different from the usual missing cone, and the shapes predicted by the low- and high-numerical-aperture theories also differ in detail. The variation in the strength of the CTF within the pass band was also calculated [16].


FIGURE 3. The Ewald sphere construction for holographic tomography. In (a), for rotation of the illumination, the Ewald sphere for the illumination translates such that its centre lies on a line through the origin at an angle θ. In (b), for rotation of the sample, the Ewald sphere for the illumination rotates such that its centre lies on a line through the origin at an angle θ.

An alternative to rotating the illumination is to rotate the sample relative to a stationary optical system. Fig. 3 shows how, for rotation of the illumination, the Ewald sphere translates (a), whereas for rotation of the object the Ewald sphere rotates (b). Rotation of the sample gives a CTF with a diablo shape, with a region of missing spatial frequencies of toroidal shape [17].
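The different supports of the two schemes can be probed with a brute-force sketch (assumed geometry: a 2D cross-section with unit-radius Ewald circles and the same semi-aperture α for illumination and detection; this is an illustration, not the paper's calculation). For illumination rotation, the accessible frequencies are the differences k_d − k_i between points on the detection and illumination caps, giving a transverse cut-off of 2 sin α along the scan direction; for object rotation, the fixed-illumination arc is swept through all orientations, giving an isotropic radial cut-off of only 2 sin(α/2).

```python
import numpy as np

alpha = np.deg2rad(60.0)            # semi-angular aperture (assumed value)
th = np.linspace(-alpha, alpha, 401)

# Points on an Ewald cap of unit radius in the (m, s) cross-section.
cap = np.stack([np.sin(th), np.cos(th)], axis=1)

# (a) Illumination rotation: accessible frequencies are all differences k_d - k_i.
diff = cap[:, None, :] - cap[None, :, :]
m_cut_illum = np.abs(diff[..., 0]).max()          # transverse cut-off ~ 2 sin(alpha)

# (b) Object rotation: the fixed-illumination arc k_d - k_i0 (k_i0 along +z)
# is swept through all object orientations; its radius sets an isotropic cut-off.
arc = cap - np.array([0.0, 1.0])
m_cut_object = np.linalg.norm(arc, axis=1).max()  # radial cut-off ~ 2 sin(alpha/2)

assert np.isclose(m_cut_illum, 2.0 * np.sin(alpha), atol=1e-3)
assert np.isclose(m_cut_object, 2.0 * np.sin(alpha / 2.0), atol=1e-3)
print(m_cut_illum, m_cut_object)
```

This makes the trade-off quantitative: illumination scanning doubles the bandwidth along the scan direction (the peanut), while sample rotation trades peak bandwidth for isotropy (the diablo), with the missing regions shaped accordingly.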

TOMOGRAPHIC IMAGING AND RECONSTRUCTION

The original paper by Wolf [1] derived image formation assuming the Born approximation. According to this model, each point of the object behaves as a point source, with a strength related to the local refractive index. The 3D Fourier transform of the object function is the object spectrum, and the spatial-frequency content of the image is given by multiplying it by the 3D CTF of the optical system. A more general approach to image formation is based on the angular-spectrum method [18]. The object can be described by a scattering function that depends on the directions of incidence and scattering. The scattering function is a function of four variables, two direction cosines each for the illumination and scattering wave vectors, and can be calculated using different forms of rigorous diffraction theory, such as coupled-wave, modal, or integral-equation methods. However, it is found that under various conditions the scattering function reduces to the 3D object spectrum [19]. The Born approximation is one example of such a case, but there are others, such as the thin-phase-screen approximation or the case of spherical symmetry. One important case is the reflection geometry: it is known that scattering from a rough surface or from a layered medium can be approximately described by the Kirchhoff approximation [19]. In all these cases, the 3D CTF can be used to reconstruct the object refractive-index data. Perhaps the most successful model for image reconstruction is based on the Rytov approximation [20-23]. For incidence by a plane wave, we process the logarithm of the scattered field. In this case the individual holographic images must be processed separately, so holographic tomography is more powerful than confocal interferometry for Rytov reconstruction, because any particular value of 3D spatial frequency in the confocal image contains contributions from many different illumination directions.
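The logarithmic processing step can be sketched as follows (a minimal 1D illustration; the weak phase object and its parameter values are assumptions, not data from the paper): the Rytov complex phase is ψ = ln(U/U_inc), whose imaginary part recovers the phase perturbation and whose real part recovers the log-amplitude perturbation of a weak object.

```python
import numpy as np

lam = 0.5
k = 2.0 * np.pi / lam
z = np.linspace(0.0, 10.0, 256)

# Incident plane wave and a weak complex perturbation (assumed object).
u_inc = np.exp(1j * k * z)
phi = 0.1 * np.exp(-(z - 5.0)**2)       # weak phase delay
mu = 0.02 * np.exp(-(z - 5.0)**2)       # weak absorption
u_tot = u_inc * np.exp(1j * phi - mu)

# Rytov processing: take the logarithm of the field normalized by the incident wave.
psi = np.log(u_tot / u_inc)

# Imaginary part gives the phase, real part the log-amplitude perturbation.
assert np.allclose(psi.imag, phi)
assert np.allclose(psi.real, -mu)
print("Rytov phase recovered")
```

Because ln(U/U_inc) depends on the specific incident wave, the logarithm must be taken per illumination direction, which is why the individually recorded holographic images suit Rytov reconstruction while the coherently summed confocal signal does not.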

REFERENCES

1. E. Wolf, Optics Comm. 1, 153-156 (1969).
2. C. J. R. Sheppard, Optik 72, 131-133 (1986).
3. C. J. R. Sheppard, M. Gu, and X. Q. Mao, Optics Comm. 81, 281-284 (1991).
4. B. R. Frieden, J. Opt. Soc. Am. 57, 56-66 (1967).
5. C. J. R. Sheppard, M. Gu, Y. Kawata, and S. Kawata, J. Opt. Soc. Am. A 11, 593-598 (1994).
6. L. Mertz, Transformations in Optics, New York: Wiley, 1965.
7. M. Davidson, K. Kaufman, I. Mazor, and F. Cohen, Proc. SPIE 775, 233-247 (1987).
8. E. Beaurepaire, A.-C. Boccara, M. Lebec, L. Blanchot, and H. Saint-Jalmes, Opt. Lett. 23, 244-246 (1998).
9. M. Roy and C. J. R. Sheppard, Optics and Lasers in Engineering 37, 631-641 (2002).
10. S. S. Kou and C. J. R. Sheppard, Optics Express 15, 13640-13648 (2007).
11. C. J. R. Sheppard, M. Roy, and M. D. Sharma, Appl. Opt. 43, 1493-1502 (2004).
12. V. Lauer, J. Microsc. 205, 165-176 (2002).
13. F. Charrière, M. A, F. Montfort, J. Kühn, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, Opt. Lett. 31, 178-180 (2006).
14. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. Dasari, and M. Feld, Nature Methods 4, 717-719 (2007).
15. S. S. Kou and C. J. R. Sheppard, Opt. Lett. 33, 2362-2364 (2008).
16. S. S. Kou and C. J. R. Sheppard, Appl. Opt. 48, H168-H175 (2009).
17. S. Vertu, J.-J. Delaunay, I. Yamada, and O. Haeberlé, Central European Journal of Physics 7, 22-31 (2009).
18. C. J. R. Sheppard and J. T. Sheridan, Proc. SPIE 1139, 32-40 (1989).
19. C. J. R. Sheppard, T. J. Connolly, and M. Gu, 117, 16-19 (1995).
20. L. Chernov, Wave Propagation in a Random Medium, New York: McGraw-Hill, 1969.
21. A. J. Devaney and G. C. Sherman, SIAM Review 15, 765-786 (1973).
22. K. Iwata and R. Nagata, Japan J. Appl. Phys. 14, Suppl. 14-11 (1975).
23. R. F. Mueller, M. Kaveh, and G. Wade, Proc. IEEE 67, 567-587 (1979).