Depth of field multiplexing in microscopy

Christian Maurer, Saranjam Khan, Stephanie Fassl, Stefan Bernet*, and Monika Ritsch-Marte

Innsbruck Medical University, Division for Biomedical Physics, Müllerstraße 44, 6020 Innsbruck, Austria
*[email protected]

Abstract: We demonstrate "depth of field multiplexing" with a high-resolution spatial light modulator (SLM) placed in a Fourier plane of the imaging path of a standard microscope. The approach provides simultaneous imaging of different focal planes in a sample with only a single camera exposure. The phase mask on the SLM corresponds to a set of superposed multi-focal off-axis Fresnel lenses, which sharply image different focal planes of the object onto non-overlapping adjacent sections of the camera chip. Depth of field multiplexing allows motion in a three-dimensional sample volume to be recorded in real time, which is demonstrated here for cytoplasmic streaming in plant cells and for rapidly swimming protozoa. © 2010 Optical Society of America

OCIS codes: (110.0180) Microscopy; (110.6880) Three dimensional image acquisition; (230.3720) Liquid-crystal devices.

References and links
1. N. A. Riza, M. Sheikh, G. Webb-Wood, and P. G. Kik, "Demonstration of three-dimensional optical imaging using a confocal microscope based on a liquid-crystal electronic lens," Opt. Eng. 47, 063201 (2008).
2. S. Hasinoff and K. Kutulakos, "Light-efficient photography," Computer Vision – ECCV 2008, 45–59 (2008). http://dx.doi.org/10.1007/978-3-540-88693-8_4.
3. S. Djidel, J. K. Gansel, H. I. Campbell, and A. H. Greenaway, "High-speed, 3-dimensional, telecentric imaging," Opt. Express 14, 8269–8277 (2006). http://www.opticsexpress.org/abstract.cfm?URI=oe-14-18-8269.
4. C. Iemmi, J. Campos, J. C. Escalera, O. Lopez-Coronado, R. Gimeno, and M. J. Yzuel, "Depth of focus increase by multiplexing programmable diffractive lenses," Opt. Express 14, 10207–10219 (2006).
5. P. Langehanenberg, L. Ivanova, I. Bernhardt, S. Ketelhut, A. Vollmer, D. Dirksen, G. Georgiev, G. von Bally, and B. Kemper, "Automated three-dimensional tracking of living cells by digital holographic microscopy," J. Biomed. Opt. 14, 018 (2009).
6. P. Ferraro, M. Paturzo, P. Memmolo, and A. Finizio, "Controlling depth of focus in 3D image reconstructions by flexible and adaptive deformation of digital holograms," Opt. Lett. 34, 2787–2789 (2009). http://ol.osa.org/abstract.cfm?URI=ol-34-18-2787.
7. J. Rosen and G. Brooker, "Non-scanning motionless fluorescence three-dimensional holographic microscopy," Nat. Photonics 2, 190–195 (2008).
8. R. Ng, "Fourier slice photography," ACM Trans. Graph. 24, 735–744 (2005). http://dx.doi.org/10.1145/1073204.1073256.
9. E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859–1866 (1995).
10. W. T. Cathey and E. R. Dowski, "New paradigm for imaging systems," Appl. Opt. 41, 6080–6092 (2002). http://ao.osa.org/abstract.cfm?URI=ao-41-29-6080.
11. M. McGuire, W. Matusik, H. Pfister, B. Chen, J. F. Hughes, and S. K. Nayar, "Optical splitting trees for high-precision monocular imaging," IEEE Comput. Graphics Appl. 27, 32–42 (2007).
12. C. Maurer, A. Jesacher, S. Bernet, and M. Ritsch-Marte, "Phase contrast microscopy with full numerical aperture illumination," Opt. Express 16, 19821–19829 (2008).


13. P. M. Blanchard and A. H. Greenaway, "Simultaneous multiplane imaging with a distorted diffraction grating," Appl. Opt. 38, 6692–6699 (1999).
14. Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, "Diffraction phase and fluorescence microscopy," Opt. Express 14, 8263–8268 (2006). http://www.opticsexpress.org/abstract.cfm?URI=oe-14-18-8263.
15. J. A. Davis and D. M. Cottrell, "Random mask encoding of multiplexed phase-only and binary phase-only filters," Opt. Lett. 19, 496–498 (1994).
16. R. Di Leonardo, F. Ianni, and G. Ruocco, "Computer generation of optimal holograms for optical trap arrays," Opt. Express 15, 1913–1922 (2007). http://www.opticsexpress.org/abstract.cfm?URI=oe-15-4-1913.
17. P. M. Blanchard and A. H. Greenaway, "Broadband simultaneous multiplane imaging," Opt. Commun. 183, 29–36 (2000).
18. S. Fürhapter, A. Jesacher, C. Maurer, S. Bernet, and M. Ritsch-Marte, "Spiral phase microscopy," in Advances in Imaging and Electron Physics, Vol. 146 (Pergamon Press, 2007).

1. Introduction

High resolution imaging normally demands objectives with a high numerical aperture. Such objectives also offer high axial resolution and a strong optical sectioning capability. Consequently, in extended and dynamic samples, such as micro-organisms moving through a sample volume, an observer can easily miss interesting processes happening in other axial planes. Techniques have therefore been developed to gather volumetric imaging data in a short time. For example, fast mechanical stages have been used to move the sample to different positions and image sample layers sequentially, or, taking the "opposite" approach, the axial scan has been performed in a scanning confocal microscope by means of an electronically controlled liquid-crystal lens [1]. Alternatively, the effective "depth of focus" (the "image-side version" of depth of field) can be extended by recording a rapid sequence of images with different focal length settings of the objective [2]. This can also be achieved electronically with a spatial light modulator (SLM) [3]. A similar effect was obtained with an SLM in the imaging path displaying a modified Fresnel lens phase profile with a larger depth of focus [4]. It is also possible to record a hologram of the sample and then reconstruct the image in different axial planes by numerical post-processing of the holographic data [5, 6]; recently, this was also achieved for fluorescence light [7]. Furthermore, digital refocusing is possible if a phase-sensitive recording of the light field is carried out, as, e.g., in Fourier slice photography [8]. The usual compromise between light gathering capability and large depth of field can be avoided in wave-front coding systems [9], where aspherical optical elements are used in combination with digital processing. This combination makes it possible to adapt the point spread function of the optical system such that an extended depth of field is achieved without the need for a reduced imaging aperture [10].

Methods where the axial resolution of the objective lens is sacrificed for the sake of a larger depth of field are less desirable than methods which preserve it. If the imaging path is "multiplexed" with beam splitters into multiple paths, each with a different focal length and its own camera, the full axial resolution of the microscope objective is maintained in each of the recorded images [11]. In this paper we demonstrate a related method, using an SLM to split the imaging path into multiple outputs, each with a different effective focal length and a different propagation direction. The corresponding images are then recorded simultaneously with a single camera in adjacent sections of the camera chip. Besides the reduced complexity of the optical design, our implementation has the advantage of increased flexibility: the size of the observed volume can be dynamically adapted by simply re-programming the displayed phase mask.

Our setup is based on a standard microscope, where an additional SLM is implemented in a Fourier plane of the imaging pathway, similar to the setups presented in [4, 12, 13].


[Fig. 1: multiplexed raw image (left), z-stack (middle), and merged quasi-3D view (right); panels at z = −4.2 µm to +4.2 µm in steps of 1.2 µm; axes x, y, z in µm]
Fig. 1. In "depth of field multiplexing" the camera chip is divided into panel sections that record different SLM-steered focal settings simultaneously. Here the sample consists of 4.5 µm polystyrene beads in agarose gel, imaged in steps of 1.2 µm. On the left we see the raw data, a single exposure of the camera. In this image (but not in the following examples, where the phase masks are optimized to avoid this) the central panel containing the undiffracted (zeroth-order) light is overexposed and has therefore been replaced by a homogeneous rectangle. The data can also be arranged as a z-stack (middle), which can then be merged for a quasi-3D appearance of the image (right).

The SLM (Holoeye 1080P) consists of a miniaturized liquid crystal screen with a resolution of 1920 × 1080 pixels. Each pixel is square, with an edge length of 8 µm. By sending the output of a computer graphics card to the SLM, each gray value of a displayed image pixel is transformed into a corresponding phase shift, in a range between 0 and 2π, of the (linearly polarized) light reflected off the respective SLM pixel. Hence the SLM acts as a phase mask which can be re-programmed at video rate (60 Hz), and whose resolution is high enough to display computer-generated phase holograms or to act as a diffractive optical element, such as a Fresnel lens or a prism array. For optimal operation of our phase-only SLM the linear polarization of the incoming beam has to be aligned parallel to the long axis of its liquid crystal molecules. The total light efficiency of the SLM is on the order of 40 %, mainly limited by the absorption of the display. For linearly polarized light the relative diffraction efficiency, i.e. the ratio of the first-order diffracted beam to the total light leaving the device, is on the order of 80 %; in this case only 10 % of the outgoing light remains unmodulated, corresponding to the zeroth diffraction order.

In the current setup the SLM is programmed to act as a multi-focal off-axis Fresnel lens, i.e., as a set of superposed Fresnel lenses with different focal lengths, where each of the embedded lenses additionally diffracts the incoming light into a different direction, targeting a different area in the camera plane. Thus the camera records a set of images in separate regions of its image sensor, each showing a sharp image of a particular axial section of the sample. Note that the phase mask does not act as a lens array (i.e., as individual lenses placed next to each other); instead, each of the included lenses is distributed over the whole SLM area. Each lens therefore acts on the complete image wave field, first by adding a selected parabolic phase shift (a "lens term") which determines the plane of sharpness, and second by adding a linear phase shift (a "prism term") which determines the diffraction angle and thus the position of the corresponding image in the camera plane. We typically program the SLM to produce 9 output images, arranged in a square 3 × 3 matrix on the camera chip.


[Fig. 2: sub-images at axial positions z = +125, +90, +54, +18, 0, −18, −54, −90, −125 µm]

Fig. 2. Multiplexed images of swimming Euglena protozoans: one frame from a slow-motion video (Media 1) with 8 diffracted sub-images arranged around the central undiffracted (zeroth-order) beam. The corresponding axial shifts in the sample volume are indicated in the figure. The scale bar corresponds to 10 µm.

A first example is shown in Fig. 1, where 4.5 µm diameter polystyrene beads fixed in agarose gel were recorded with a relative axial spacing of 1.2 µm between adjacent images. The sample was illuminated with a high-NA oil condenser (NA = 1.3) and imaged with a 100× oil immersion objective (NA = 1.3). The corresponding raw data can be seen on the left as a single multi-focus image comprising 9 axial planes in a range from −4.2 µm to +4.2 µm. The same data displayed as a z-stack is shown in the middle, and on the right the image stack was high-pass filtered and then merged to give a three-dimensional impression (though not a tomographic reconstruction) of the beads.

Advancing to biological samples, we imaged Euglena protozoans swimming in water with a 40× water objective lens. Figure 2 shows a single frame taken from a video file (Media 1), which is slowed down by a factor of 3 to visualize the fast motion of the protozoans. The axial positions cover a range from −125 µm to +125 µm, and each sub-image is sharply focused at a different axial plane. An observer can thus monitor the motion of the specimens simultaneously and in real time at 9 axial positions. Note that, in contrast to Fig. 1, the central panel now also contains meaningful information, since an optimized phase mask was displayed at the SLM, as will be explained below.

2. Experimental issues

2.1. Setup and image formation

A sketch of the generic setup is drawn in Fig. 3. A laser beam (frequency-doubled CW Nd:YAG with a wavelength of 532 nm and a typical power of 50 mW) is expanded and illuminates a rotating diffuser plate, which acts as the effective illumination source.


The rotating diffuser is employed in order to suppress laser speckle in the output image by time averaging. An alternative way to reduce speckle, by sending the light through a multi-mode fiber, was demonstrated in [14]. As can be seen in Fig. 3, the diameter of the illuminated spot (a) determines the numerical illumination aperture, which is matched to the aperture size of the condenser (b). This leads to a homogeneously illuminated sample volume, which is imaged with the objective lens.

In principle the SLM should be placed in the back focal plane of the objective in order to modulate the Fourier transform of the object wave. However, since the Fourier plane of the objective lies inside the objective tube, and the SLM is a reflective rather than a transmissive device, additional relay optics (consisting of a tube lens with a focal length f = 160 mm and an achromatic lens with f = 150 mm) is inserted behind the objective to image the Fourier plane onto the surface of the SLM. The phase pattern displayed on the SLM is programmed to split the incoming image wave into multiple output waves which travel with different divergences to an ocular lens, which images the multiplexed output waves sharply onto adjacent sectors of a CCD camera. The maximal diffraction angle of the SLM is limited by its pixel size of 8 µm to less than 2°. Therefore, to separate the individual diffraction orders, the field of view has to be confined. This is done by an adjustable slit at the intermediate image plane between the two relay lenses, as indicated in Fig. 3.

In a standard microscope without SLM, the axial position z of a sharply imaged plane in the sample volume is normally located at a distance z = f_Obj from the objective lens, where f_Obj is the effective focal length of the objective. By positioning the SLM in a Fourier plane of the imaging path, this axial position can be shifted by an amount Δz by displaying a Fresnel lens phase pattern with a programmable focal length f_SLM on the SLM surface. The axial shift of the sharply imaged sample plane is then related to the focal length of the displayed Fresnel lens by [3]

Δz = −f_Obj² / f_SLM.    (1)

The effect of a thin lens with focal length f on an incoming wave with wavelength λ is to modulate its wavefront with a spatially dependent phase offset ϕ(x, y) = (x² + y²)π/(λf), called the "lens term". Therefore, a corresponding phase pattern which can be displayed on the SLM surface in order to produce a Fresnel lens with a focal length of f_SLM can be calculated as

ϕ(x, y) = mod_2π[(x² + y²) π/(λ f_SLM)],    (2)

where the Cartesian transverse coordinates x and y are measured from the center of the lens and mod_2π denotes the modulo-2π operation. The modulo operation generates a blazed Fresnel lens with a theoretically achievable diffraction efficiency of 100 % for the design wavelength λ, provided the phase shift of each SLM pixel can be continuously controlled in a range between 0 and 2π.

The resolution of the pixelated SLM limits the minimal focal length f_SLM of a sufficiently resolved Fresnel lens on the SLM. The resolution limit is determined by the condition that the maximal phase gradient |∇ϕ| in the displayed phase pattern has to be smaller than π per pixel diameter p, which would correspond to a binary grating with a grating period of 2 pixels, i.e.

|∇ϕ| ≤ π/p.    (3)

Evaluating the gradient of the lens term of Eq. (2) at the border of the displayed lens, r_max, and inserting it into this condition yields a minimal focal length of

f_SLM ≥ 2 r_max p / λ.    (4)

With a radius of the displayed Fresnel lens of r_max = 4 mm, an SLM pixel size of p = 8 µm, and an illumination wavelength of λ = 532 nm, the smallest realizable focal length is approximately f_SLM,min = ±150 mm. The maximal shift Δz_max of the sharply imaged plane within the sample is determined according to Eq. (1) by the effective focal length of the employed objective, which is about 1.6 mm for a 100× objective, or 16 mm for a 10× objective. Under this condition the maximal possible shift Δz_max can vary between 17 µm and 1.7 mm for a 100× or 10× objective, respectively. In our particular setup the telescope system in front of the SLM has to be considered additionally, since it decreases the effective refractive power of the Fresnel lens, so that the obtainable shift Δz_max is reduced by a factor of (150/160)², corresponding to the square of the ratio of the focal lengths of the employed telescope lenses. In the experiment it turned out that a significant loss of diffraction efficiency already occurs at a focal length of about 200 mm, i.e. before the estimated limit of 150 mm is reached; this is caused by the onset of undersampling at the outer border of the displayed Fresnel lens. The quantitative dependence of the diffraction efficiency on the focal length is studied experimentally later in this paper.

For the desired additional shift of the image in the camera plane to another lateral position, a linear phase offset ϕ(x, y) = g_x x + g_y y (a "prism term") has to be added to the phase profile of the Fresnel lens [Eq. (2)] before the modulo-2π operation is performed. This transforms the original on-axis Fresnel lens into an off-axis lens and changes the propagation direction of the outgoing (diffracted) beam. In this case the image is shifted by a distance of Δr = (g_x e_x + g_y e_y) f_L λ/2π in the camera plane, where f_L is the focal length of the ocular lens, and e_x and e_y are unit vectors in the x and y directions, respectively. In the current setup the grating constants of the superposed linear gratings are 5 pixels (40 µm) in the x and 7 pixels (56 µm) in the y direction, which means g_x = 2π/40 µm and g_y = 2π/56 µm. For a focal length of the ocular lens of f_L = 200 mm this leads to an image shift of Δx ≈ 2.7 mm and Δy ≈ 1.9 mm, which also corresponds to the size of each sub-image on the CCD chip. The corresponding splitting of the output directions was adapted to optimally fill the employed CCD chip with an edge size of 9 mm × 7 mm and a resolution of 1380 × 1035 pixels.
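To make the construction of the displayed pattern concrete, the following Python sketch builds a single off-axis Fresnel lens from the lens term of Eq. (2) plus a prism term, and evaluates the corresponding axial shift via Eq. (1). This is only an illustration of the formulas above under assumed parameters (grid size, focal lengths), not the authors' actual software.

```python
import numpy as np

# Parameters as quoted in the text (SI units); the grid size is an assumption.
wavelength = 532e-9      # illumination wavelength (lambda)
pixel_size = 8e-6        # SLM pixel pitch p
n_pix = 1000             # illustrative 1000 x 1000 pixel region of the SLM
f_slm = 0.2              # focal length of the displayed Fresnel lens (200 mm)
f_obj = 1.6e-3           # effective focal length of a 100x objective (1.6 mm)

# Pixel coordinates measured from the lens center.
c = (np.arange(n_pix) - n_pix / 2) * pixel_size
x, y = np.meshgrid(c, c)

# Lens term of Eq. (2): parabolic phase, wrapped modulo 2*pi (blazed Fresnel lens).
lens_term = np.pi * (x**2 + y**2) / (wavelength * f_slm)

# Prism term: linear phase with grating periods of 5 and 7 pixels (as in the text),
# which steers the diffracted sub-image to a different lateral position.
gx = 2 * np.pi / (5 * pixel_size)
gy = 2 * np.pi / (7 * pixel_size)
prism_term = gx * x + gy * y

# Off-axis Fresnel lens phase mask, wrapped to [0, 2*pi) for the phase-only SLM.
phase_mask = np.mod(lens_term + prism_term, 2 * np.pi)

# Axial shift of the sharply imaged plane according to Eq. (1): about -13 um here.
delta_z = -f_obj**2 / f_slm
print(f"axial shift = {delta_z * 1e6:.1f} um")
```

Superposing several such lenses with different focal lengths and prism directions into one phase-only mask is the task addressed in the next subsection.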

2.2. Phase mask optimization

In order to produce a set of N images of different axial planes, which travel to different sections of the camera chip, it would in principle be possible to calculate the individual off-axis Fresnel lenses for each desired axial sample plane with the method explained above, and then to superpose all of the corresponding complex transmission functions by simply adding them according to T_total = Σ_j exp(iϕ_j), where the index j runs from 1 to N. However, the result of this operation is a complex function T_total which contains not only phase modulations but also amplitude modulations. Since our SLM acts as a pure phase modulator, it cannot display this additional amplitude modulation, which would be necessary to achieve the desired output without artifacts. Various methods have been developed to deal with this problem. A simple but effective method called random mask encoding [4, 15] divides the array of available SLM pixels into N randomly scattered, complementary sub-arrays, which are allocated to the N individual phase patterns to be displayed. Each of these phase patterns is then only displayed at the positions belonging to its sub-array. However, this method suffers from a quadratic decrease in diffraction efficiency with an increasing number of superposed patterns. For example, in the case of a phase mask which produces 8 output images, the intensity of each individual image will be reduced by a factor of 1/64 as compared to a single output image.
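Random mask encoding is simple enough to sketch directly. The fragment below, a minimal illustration with assumed grid size and lens parameters rather than the authors' code, assigns each SLM pixel at random to one of N off-axis Fresnel lens patterns, so that the displayed mask remains purely phase-valued.

```python
import numpy as np

wavelength = 532e-9
pixel_size = 8e-6
n_pix = 1000  # illustrative grid

c = (np.arange(n_pix) - n_pix / 2) * pixel_size
x, y = np.meshgrid(c, c)

def off_axis_lens(f_slm, period_x, period_y):
    """One off-axis Fresnel lens: lens term of Eq. (2) plus a prism term."""
    lens = np.pi * (x**2 + y**2) / (wavelength * f_slm)
    prism = 2 * np.pi * (x / (period_x * pixel_size) + y / (period_y * pixel_size))
    return np.mod(lens + prism, 2 * np.pi)

# Illustrative set of N lenses with different focal lengths and steering directions.
lens_params = [(0.20, 5, 7), (0.25, -5, 7), (-0.20, 5, -7), (-0.25, -5, -7)]
patterns = [off_axis_lens(*p) for p in lens_params]
N = len(patterns)

# Random mask encoding: every pixel displays exactly one of the N patterns.
assignment = np.random.randint(0, N, size=(n_pix, n_pix))
phase_mask = np.choose(assignment, patterns)

# Each sub-image is fed by roughly 1/N of the pixels and therefore receives
# only about 1/N^2 of the intensity of a single, non-multiplexed image.
```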


[Fig. 4: (A) diffracted intensity vs. number of multiplexed images for the wGS and random mask encoding methods; (B) diffracted intensity, contrast, and magnification vs. refractive power of the programmed lens term (1/m)]

Fig. 4. Performance of beam multiplexing phase masks. A: Experimentally measured diffraction efficiencies of phase masks produced by the random mask encoding and by the wGS methods, respectively, as a function of the number of different output beams. B: diffraction efficiency, contrast and magnification factor of a wGS phase mask as a function of the refractive power of the programmed lens term.

Residual light that is diffusely scattered by the phase mask then results in an undesired background in the recorded images. This kind of efficiency loss can be prevented by the weighted Gerchberg-Saxton (wGS) algorithm [16]. The GS algorithm is an iterative search algorithm which aims to find an optimal phase-only pattern generating the desired output beams. In the optimization process the numerically reconstructed output image is compared with the desired one, and depending on the result the phase pattern is iteratively changed until the desired output image is obtained. In this case the incoming light intensity is used almost optimally, and for our task of producing N multiplexed images the intensity of each image is proportional to 1/N.

A quantitative comparison between the efficiencies of the random mask encoding and the wGS algorithm for producing multiplexed output images is presented in Fig. 4(a), where the averaged intensity of a sharply imaged plane of a test sample is plotted as a function of the number of simultaneously diffracted images for both the random mask encoding and the wGS methods. For both methods the intensity was uniform over the whole field of view. However, the comparison shows that for the random mask encoding method the image intensity falls off quadratically with the number of output images (exact fitting result: I ∝ N^(−2.09±0.009)), whereas there is only a linear dependence for the wGS algorithm (I ∝ N^(−1.07±0.08)), as expected. For the case of eight multiplexed output images produced by the wGS algorithm, the diffracted intensity of each image is similar to the intensity of the undiffracted (zeroth-order) beam in the center of the camera plane. This undiffracted image still appears because the relative diffraction efficiency of the used SLM only reaches 80 % instead of the optimally achievable 100 %, due to its limited fill factor. It should be emphasized that for the wGS algorithm the total intensity of all N diffracted images remains almost constant. This is not the case for the random mask encoding method, where the cumulative intensity of the N images drops by a factor of 1/N; in this case, the intensity in the zeroth diffraction order increases accordingly. As an example, the random mask method was used for the images displayed in Fig. 1, resulting in an overexposed central zero-order image. In contrast, the wGS method was used for the images in Fig. 2 and Fig. 5, where all intensities of the 8 diffracted images and the central zeroth-order image are similar.


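The paper does not spell out the wGS optimization itself; as a rough illustration of the idea, the sketch below iteratively re-weights the superposed target phase patterns so that all diffraction orders end up with similar intensity. This is our own simplified adaptation of the weighted Gerchberg-Saxton concept of [16] to superposed off-axis lenses, not the authors' implementation; the function name and iteration count are arbitrary choices.

```python
import numpy as np

def weighted_superposition(target_phases, n_iter=30):
    """Phase-only mask arg(sum_m w_m * exp(i*phi_m)) with the weights w_m adjusted
    iteratively (in the spirit of the wGS algorithm of [16]) so that all N
    diffraction orders receive nearly equal intensity."""
    weights = np.ones(len(target_phases))
    phase = np.angle(sum(np.exp(1j * tp) for tp in target_phases))
    for _ in range(n_iter):
        # Amplitude diffracted into each target order by the current mask.
        amps = np.array([abs(np.mean(np.exp(1j * (phase - tp))))
                         for tp in target_phases])
        weights *= amps.mean() / amps  # boost the weaker orders
        phase = np.angle(sum(w * np.exp(1j * tp)
                             for w, tp in zip(weights, target_phases)))
    return np.mod(phase, 2 * np.pi)

# Minimal usage example: equalize two blazed gratings with different periods.
c = np.arange(512) * 8e-6
xx, yy = np.meshgrid(c, c)
gratings = [2 * np.pi * xx / (5 * 8e-6), 2 * np.pi * yy / (7 * 8e-6)]
mask = weighted_superposition(gratings)
```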

2.3. Assessment of imaging quality

Another important characterization of multi-focal imaging is the dependence of the image parameters on the refractive power of the programmed Fresnel lens. For our SLM the maximal applicable refractive power is, according to Eq. (4), limited to about 5 m⁻¹ by the pixel size (8 µm) and the radius of the displayed phase mask (500 pixels). As explained above, for a higher refractive power the lens is undersampled at its outer border, which reduces its effective diameter. This leads to a reduction of the image resolution, and vignetting effects may appear. To quantify this we compared magnification, sharpness, and contrast of multiplexed images taken for 3 different refractive powers, 1/f_min = 5 m⁻¹, 7.5 m⁻¹, and 10 m⁻¹, which increasingly violate the undersampling limit. The refractive power within the panel was multiplexed in equidistant steps from −1/f_min to +1/f_min. Looking for a well-characterized test sample for image quality assessment, we decided to use a thin amplitude mask, namely a resolution target from Edmund Optics. For each lens term displayed on the SLM the position of the test pattern was re-adjusted to obtain a sharp image. The experiment was performed with an air condenser (NA = 0.3) for illumination and a 40× water objective (NA = 0.75) for imaging, where a refractive power change of 1/f = 5 m⁻¹ of the Fresnel lens corresponded to an axial shift of 145 µm of the sharply imaged sample plane.

The experimental results are summarized in Fig. 4(b). The diffracted intensity (red "+" symbols) as a function of the refractive power of the corresponding Fresnel lens has a rather broad maximum in the range between −5 m⁻¹ and +5 m⁻¹, and then decreases for higher absolute values of the refractive power, as expected in view of our previous considerations. A similar dependence is found for the image contrast (blue "×" symbols), defined as the ratio between sharp image structures (the bars in the resolution target) and the background. The contrast decreases for higher refractive powers due to the loss of diffraction efficiency, which generates a background of diffusely scattered light. The sharpness of the image, on the other hand, is not affected even in cases where the loss of contrast and brightness is already noticeable.

A possible concern is that the image magnification depends linearly on the refractive power of the displayed Fresnel lens, which would give rise to different scales in the different panels. The reason for this dependence is that the effective focal length of the combined imaging system (consisting of objective, Fresnel lens, and imaging lens) also changes linearly with the refractive power of the Fresnel lens. This is in principle confirmed by our experimental data [cf. Fig. 4(b), open circles]; however, the experimentally observed change in magnification is only on the order of 1 % for a refractive power change of 1/f = 5 m⁻¹. Moreover, in the optimal working range, where the modulation of the refractive power stays below 5 m⁻¹, the resulting magnification effects are even smaller. In an earlier publication the change in magnification has been calculated analytically, and a method to avoid it by means of a telecentric imaging system has been demonstrated [3].

3. Dynamic samples

The presented setup allows the observation of dynamic processes in real time (or in slow motion) in different focal planes simultaneously. Figure 5 and Video 2 (Media 2) show an example of multiplexing for a dynamic process, namely cytoplasmic streaming inside plant cells. The images were recorded with a 40× air objective (effective NA of 0.6) used as an illumination condenser, and a 40× water immersion objective (NA = 0.75) as the imaging objective.


[Fig. 5: sub-images at axial positions z = +8.5, +6.0, +3.6, +1.2, 0.0, −1.2, −3.6, −6.0, −8.5 µm]

Fig. 5. Multiplexed images of cytoplasmic streaming in the stamen hair cells of a Tradescantia flower. The figure shows a single frame (unprocessed data) obtained with SLM-based depth of field multiplexing. The image consists of 8 sub-images arranged around the central zeroth-order image, which sharply display different focal planes within the sample with a relative axial displacement of 2.4 µm. The three pairs of arrows (red, orange, yellow) indicate transitions where the streaming path can be seen to change from one slice to the neighboring one. The corresponding movie (Media 2) is shown in real time, with a recording frame rate of 15 Hz.

Figure 5 represents an individual frame (recorded with an exposure time of 50 ms) extracted from a real-time movie showing cytoplasmic streaming inside a stamen hair cell of a Tradescantia sp. flower. Each camera frame consists of 9 sub-images, sharply displaying 9 different focal planes with a relative axial displacement of 2.4 µm between adjacent sub-images. The three pairs of arrows (red, orange, yellow) indicate transitions of the streaming path from one slice to the neighboring one. Each slice has a depth of field of roughly 1 µm. Since organelles are also visible when they are slightly out of focus, it is possible to follow their path through the volume of the cell body. The figure and the related video (Media 2) also show that all 9 sub-frames, including the central zeroth-order (undiffracted) image, appear with equal intensity levels. This is not achieved by post-processing (the images show raw data), but solely by using the optimizing wGS algorithm for the calculation of the SLM phase mask, as explained above.

The temporal resolution of our imaging method is limited by the need for time averaging in order to avoid a disturbing speckle pattern. Since the diffuser rotates with a frequency of 250 Hz, the tangential velocity of the diffuser plate at the position of the laser spot (at about 1 cm distance from the center) is on the order of 15 m/s. For a typical laser spot size of 5 mm, the plate thus moves by about 30 spot diameters during a typical camera exposure time of 10 ms, creating a corresponding number of different speckle patterns which are averaged. The experiments have shown that an averaging time of 1 ms is sufficient to reduce the contrast of the disturbing speckle pattern to an acceptable level. In principle, the necessity to reduce speckle by temporal averaging could be avoided by using an incoherent illumination source, such as a light emitting diode. However, to date exposure times below 10 ms lead to an undesired temporal fluctuation of the recorded signal intensity, caused by the 60 Hz refresh cycle of the SLM.
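The speckle-averaging estimate above follows from a few lines of arithmetic; the sketch below simply reproduces the numbers quoted in the text (diffuser rotation frequency, distance of the laser spot from the rotation axis, spot size, and exposure time).

```python
import math

rotation_freq = 250.0   # Hz, rotation frequency of the diffuser
spot_radius = 0.01      # m, distance of the laser spot from the rotation center
spot_size = 5e-3        # m, diameter of the laser spot on the diffuser
exposure_time = 10e-3   # s, typical camera exposure time

# Tangential velocity of the diffuser at the laser spot (~15 m/s).
velocity = 2 * math.pi * spot_radius * rotation_freq

# Spot diameters traversed per exposure, i.e. roughly the number of
# independent speckle patterns that are averaged (~30).
n_patterns = velocity * exposure_time / spot_size

print(f"velocity = {velocity:.1f} m/s, averaged speckle patterns = {n_patterns:.0f}")
```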



4. Conclusions

The presented setup allows the simultaneous observation of different focal planes within a sample volume. This is achieved without a reduction of the axial or lateral resolution as compared to a standard imaging setup, i.e. the full numerical apertures of the illumination and the imaging paths are used and define the respective image resolution. These features are particularly useful for the imaging of dynamic processes in a sample, where other methods, such as scanning through the sample volume, are too slow to study simultaneously occurring events. As an example, it was possible to observe dynamic processes in a volume, such as cytoplasmic streaming in a plant cell, with high axial resolution (of 1 µm) and at video rate.

In the demonstrated setup the generation of spatially multiplexed image paths with different beam divergences is accomplished by a programmable phase-only SLM, which has the advantage that the imaging parameters, such as the axial separation of adjacent sharply imaged sample planes, can easily be adapted to the needs of each particular sample by simply re-programming the phase pattern displayed on the SLM. However, in principle it is also possible to produce a fixed transmissive diffractive optical element (DOE) which displays the same phase pattern as the SLM, and to insert it directly into the optical path of a standard microscope in order to generate a similar multiplexed output image. Such a DOE can be produced with an even higher spatial resolution (on the order of 1 µm, or below) than that of the employed SLM (with its 8 µm pixel size), allowing a larger field of view to be addressed without a disturbing overlap of the sub-images, and also allowing the axial extension of the sharply imaged sample volume to be increased. Advantageously, the alignment of such a DOE in the optical path of a microscope is rather uncritical, since a lateral misalignment merely results in a corresponding lateral shift of the output image, and an axial misalignment around a Fourier plane of the imaging path has only a second-order effect on the produced output images.

The presented method is not only applicable to standard bright-field imaging, but can in principle be combined with various contrast-enhancing methods, such as dark-field, phase-contrast, or differential interference contrast (Nomarski) microscopy, and even with fluorescence imaging. Particularly in this field there exist sophisticated methods to extract additional information from a stack of multi-focal images, such as computer-aided sharpening by three-dimensional deconvolution of the image stack, which yields a resolution almost comparable to confocal microscopy, or the generation of pseudo-three-dimensional images which can be rotated and viewed from different perspectives. However, one drawback of the method compared to standard setups is its sensitivity to chromatic errors, since beam multiplexing by grating diffraction produces dispersion effects, i.e. the different wavelength components within multi-colored samples produce sub-images which are laterally shifted with respect to each other. Thus, to date the method is mainly suited for the imaging of monochromatically illuminated samples, or of fluorescent samples with a sufficiently small emission bandwidth. Several methods to compensate such dispersion effects have been reported, which could also be applicable in this setup [17, 18].
On the other hand, there are no special requirements on the spatial coherence of the illumination source, allowing, for example, a spatially incoherent light source to be used together with a high-NA condenser for illumination, which optimizes the maximally achievable spatial resolution. One apparent disadvantage of the setup seems to be that the camera chip is divided into different sub-sectors, thus reducing the available number of pixels for each of the recorded sub-images. However, this does in principle not affect the achievable image resolution, provided that the image magnification is adapted to the pixel resolution of the camera.


Nevertheless, if the resolution of a given standard microscope is not limited by the NA of the employed objectives (as is usually the case), but by the camera resolution, then the presented multiplexing method would result in a significant decrease of the lateral image resolution. However, this can be straightforwardly compensated by using a camera with a correspondingly higher number of pixels. Fortunately, modern cameras already offer pixel counts in the 10-megapixel range, with a still increasing tendency. Image multiplexing could be an ideal application for such megapixel camera chips, as maximal advantage can be drawn from their full resolution.

Acknowledgments

This work was supported by the Austrian Science Fund (FWF) Project No. P19582-N20 and the Higher Education Commission of Pakistan (HEC). M.R.-M. wishes to thank Carol Cogswell and her Applied Optics group for a demonstration of cytoplasmic streaming in Tradescantia flowers, which brought this interesting dynamic process to her attention.
