Vol. 26, No. 13 | 25 Jun 2018 | OPTICS EXPRESS 17705

Hyperspectral ghost imaging camera based on a flat-field grating

Shengying Liu,1,2 Zhentao Liu,1 Jianrong Wu,1 Enrong Li,1 Chenyu Hu,1,2 Zhishen Tong,1,2 Xia Shen,1 and Shensheng Han1,*

1 Key Laboratory for Quantum Optics and Center for Cold Atom Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
2 University of Chinese Academy of Sciences, Beijing 100049, China

* [email protected]

Abstract: A spectral camera based on ghost imaging via sparsity constraints (GISC) acquires the three-dimensional (3D) spatial-spectral data-cube of a target on a two-dimensional (2D) detector in a single snapshot. However, its spectral and spatial resolution are interrelated, because both are set by the same spatial random phase modulator. In this paper, we theoretically and experimentally demonstrate a system that equips the GISC spectral camera with a flat-field grating to disperse the light fields before the spatial random phase modulator, thereby decoupling the spatial and spectral resolution. Through a theoretical derivation of the imaging process we obtain a spectral resolution of 1 nm and a spatial resolution of 50 µm for the new system, both of which are verified by experiment. The new system can not only modulate the spatial and spectral resolution separately, but also opens the possibility of optimizing the light field fluctuations of different wavelengths according to the imaging scene.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

OCIS codes: (110.0110) Imaging systems; (110.1758) Computational imaging; (110.4234) Multispectral and hyperspectral imaging; (110.6150) Speckle imaging; (300.6550) Spectroscopy, visible.

References and links

1. E. Herrala, J. T. Okkonen, T. S. Hyvarinen, M. Aikio, and J. Lammasniemi, "Imaging spectrometer for process industry applications," Proc. SPIE 2248, 33–40 (1994).
2. R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, "Imaging spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)," Remote Sens. Environ. 65(3), 227–248 (1998).
3. L. Gao, J. Liang, C. Li, and L. V. Wang, "Single-shot compressed ultrafast photography at one hundred billion frames per second," Nature 516(7529), 74–77 (2014).
4. N. A. Hagen and M. W. Kudenov, "Review of snapshot spectral imaging technologies," Opt. Eng. 52(9), 090901 (2013).
5. L. Gao and L. V. Wang, "A review of snapshot multidimensional optical imaging: measuring photon tags in parallel," Phys. Rep. 616, 1–37 (2016).
6. S. K. Sahoo, D. Tang, and C. Dang, "Single-shot multispectral imaging with a monochromatic camera," Optica 4(10), 1209–1213 (2017).
7. L. Gao, R. T. Kester, and T. S. Tkaczyk, "Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy," Opt. Express 17(15), 12293–12308 (2009).
8. M. W. Kudenov, J. M. Craven-Jones, C. J. Vandervlugt, E. L. Dereniak, and R. W. Aumiller, "Faceted grating prism for a computed tomographic imaging spectrometer," Opt. Eng. 51(4), 044002 (2012).
9. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, 2014).
10. T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, 2012).
11. C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J. 27(3), 379–423 (1948).
12. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, "Spectral image estimation for coded aperture snapshot spectral imagers," Proc. SPIE 7076, 707602 (2008).
13. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, "Compressive coded aperture spectral imaging: an introduction," IEEE Signal Process. Mag. 31(1), 105–115 (2014).
14. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, "Video rate spectral imaging using a coded aperture snapshot spectral imager," Opt. Express 17(8), 6368–6388 (2009).
15. J. Wu, X. Shen, H. Yu, Z. Chen, Z. Tao, S. Tan, and S. Han, "Snapshot compressive imaging by phase modulation," Acta Opt. Sin. 34(10), 1011005 (2014).


https://doi.org/10.1364/OE.26.017705 Received 2 May 2018; revised 13 Jun 2018; accepted 13 Jun 2018; published 22 Jun 2018


16. Z. Liu, S. Tan, J. Wu, E. Li, X. Shen, and S. Han, "Spectral camera based on ghost imaging via sparsity constraints," Sci. Rep. 6, 25718 (2016).
17. M. Giglio, M. Carpineti, and A. Vailati, "Space intensity correlations in the near field of the scattered light: a direct measurement of the density correlation function g(r)," Phys. Rev. Lett. 85(7), 1416 (2000).
18. R. Cerbino, L. Peverini, M. A. C. Potenza, A. Robert, P. Bösecke, and M. Giglio, "X-ray-scattering information obtained from near-field speckle," Nat. Phys. 4(3), 238–243 (2008).
19. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
20. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
21. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University, 2012).
22. M. Elad, "Optimized projections for compressed sensing," IEEE Trans. Signal Process. 55(12), 5695–5702 (2007).
23. M. Chen, E. Li, and S. Han, "Application of multi-correlation-scale measurement matrices in ghost imaging via sparsity constraints," Appl. Opt. 53(13), 2924–2928 (2014).
24. J. M. Duarte-Carvajalino and G. Sapiro, "Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization," IEEE Trans. Image Process. 18(7), 1395–1408 (2009).
25. X. Xu, E. Li, X. Shen, and S. Han, "Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization," Chin. Opt. Lett. 13(7), 071101 (2015).
26. J. M. Lerner, R. J. Chambers, and G. Passereau, "Flat field imaging spectroscopy using aberration corrected holographic gratings," Proc. SPIE 268, 122–128 (1981).
27. E. Sokolova, "Holographic diffraction gratings for flat-field spectrometers," J. Mod. Opt. 47(13), 2377–2389 (2000).
28. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, "Ghost imaging with thermal light: comparing entanglement and classical correlation," Phys. Rev. Lett. 93(9), 093602 (2004).
29. B. Luo, Z. Wen, Z. Wen, and T. Zeng, "Design of concave grating for ultraviolet-spectrum," Spectrosc. Spect. Anal. 32(6), 1717–1721 (2012).
30. S. Tan, Z. Liu, E. Li, and S. Han, "Hyperspectral compressed sensing based on prior images constrained," Acta Opt. Sin. 35(8), 0811003 (2015).
31. A. F. Goetz, G. Vane, J. E. Solomon, and B. N. Rock, "Imaging spectrometry for earth remote sensing," Science 228(4704), 1147–1153 (1985).
32. F. F. Sabins, Remote Sensing: Principles and Applications (Waveland, 2007).
33. T. Zimmermann, J. Rietdorf, and R. Pepperkok, "Spectral imaging and its applications in live cell microscopy," FEBS Lett. 546(1), 87–92 (2003).

1. Introduction

Spectral imaging is a multidimensional data acquisition technology combining spectroscopy and image analysis, which captures a three-dimensional (3D) spectral data-cube (x, y, λ) containing information about the imaging scene. With both spatial and spectral resolving capabilities, spectral imaging is extremely effective for surveying scenes and extracting detailed information. Conventional spectral imaging, operating in a point-to-point imaging mode, requires scanning along either the spatial or the wavelength axis, since the 3D spectral data-cube is detected slice by slice on a two-dimensional (2D) detector [1, 2]. Hence, its application is restricted in fields where ultra-fast imaging is needed [3]. Recently, remarkable snapshot spectral imaging techniques have been developed to acquire a 3D spectral data-cube in a single exposure [4–6], such as the field-splitting approach, which splits the field-of-view (FoV) and then spreads the 3D information onto a 2D plane using spectroscopic devices [7], and the computed tomography imaging approach, which utilizes an orthogonal grating or other spectroscopic devices to tomographically project the 3D information onto a 2D plane and then numerically reconstructs the 3D data-cube [8, 9]. However, because the correlations between pixels and wavelengths of the image are not exploited in the reconstruction algorithm, the information acquisition efficiency of these approaches is below the Shannon limit [10, 11]. Another important category of snapshot spectral imaging is coded aperture spectral imaging, such as the coded aperture snapshot spectral imager (CASSI), which utilizes a binary-coded mask and an equilateral prism to modulate the light fields and captures a spectral image with a single-shot 2D measurement [12–14]. By exploiting the compressive sampling principle, such a spectral imager improves the sampling efficiency.
Unlike these existing imaging systems, a spectral camera based on ghost imaging via sparsity constraints (GISC) was proposed [15, 16]. The GISC spectral camera modulates the image over the whole


spectral band using a spatial random phase modulator [17, 18], which enables a 3D spectral data-cube to be captured in a single-shot 2D measurement. Furthermore, by combining with compressive sensing (CS) theory [19–21], the GISC spectral camera can obtain the information at less than the Nyquist rate, which improves the utilization efficiency of the optical channel capacity and realizes compressive sensing of the information during the acquisition process [16]. The previously proposed GISC spectral camera, however, cannot modulate the spatial and spectral resolution independently. To decouple the spatial and spectral resolution, one could use spectroscopic devices to disperse the light fields after the modulator, as in many other snapshot spectral imaging systems [4, 5]. However, this limits the possibility of optimizing the light field fluctuations of different wavelengths according to the imaging scene [22–25]. This paper presents an improved system that uses a flat-field grating [26, 27] to spatially disperse the light fields of different wavelengths first. The dispersed light field then changes from thermal light into spatially fluctuating pseudo-thermal light at a spatial random phase modulator [17, 18], which generates uncorrelated speckles for different wavelengths and positions, and is recorded by a charge-coupled device (CCD) in a single exposure. Besides retaining the advantages of the previous GISC spectral camera, the new system can regulate the spatial and spectral resolution separately, through the spatial random phase modulator and the flat-field grating respectively, and has a much improved spectral resolution.

2. Theory

2.1. Schematic and system model

The basic schematic of the GISC hyperspectral camera based on a flat-field grating is shown in Fig. 1. The system is composed of four modules: (1) an imaging module (an objective lens), which projects the scene onto the first imaging plane 'b'; (2) a dispersion module (a flat-field grating), which spatially disperses the light fields of different wavelengths and images them onto plane 'c'; (3) a modulation module (a spatial random phase modulator, a microscope objective, and a CCD), which modulates the light fields of different wavelengths and positions to generate uncorrelated speckles and then magnifies the speckles in plane 'd' to be recorded by the CCD; and (4) a demodulation module, which recovers the target image via an optimization algorithm. Denoting by I_b(x_0, y_0, λ) the light intensity distribution of a monochromatic point source in the first imaging plane 'b', and by I_d(x_2, y_2) the whole light intensity distribution over all wavelengths in plane 'd', we have

$$ I_d(x_2, y_2) = \iiint I_b(x_0, y_0, \lambda)\, h_I(x_2, y_2; x_0, y_0, \lambda)\, dx_0\, dy_0\, d\lambda, \tag{1} $$

where h_I(x_2, y_2; x_0, y_0, λ) is the incoherent intensity impulse response function of the system from plane 'b' to plane 'd', (x_0, y_0) and (x_2, y_2) are the coordinates of planes 'b' and 'd', respectively, and λ is the wavelength of the light field. To realize one-armed ghost imaging by calibrating the spatial intensity fluctuation of the pre-determined reference arm, monochromatic point sources at different pixels and wavelengths are used in the first imaging plane 'b' to acquire the incoherent intensity impulse response functions of the system before the imaging process. A coherent monochromatic point source at pixel (x_0', y_0') with wavelength λ', expressed as I_br(x_0, y_0, λ; x_0', y_0', λ') = δ(x_0 − x_0', y_0 − y_0', λ − λ'), is therefore applied to illuminate the flat-field grating and the spatial random


Fig. 1. Schematic of GISC hyperspectral camera based on a flat-field grating. (a) The object plane; (b) the first imaging plane; (c) the diffractive plane; (d) the speckles plane; (1) an objective lens; (2) a flat-field grating; (3) a spatial random phase modulator; (4) a microscope objective; (5) a Charge Coupled Device (CCD).

phase modulator. The light intensity in plane 'd', I_dr(x_2, y_2; x_0', y_0', λ'), is then described as

$$
\begin{aligned}
I_{dr}(x_2, y_2; x_0', y_0', \lambda') &= \iiint I_{br}(x_0, y_0, \lambda; x_0', y_0', \lambda')\, h_I(x_2, y_2; x_0, y_0, \lambda)\, dx_0\, dy_0\, d\lambda \\
&= \iiint \delta(x_0 - x_0', y_0 - y_0', \lambda - \lambda')\, h_I(x_2, y_2; x_0, y_0, \lambda)\, dx_0\, dy_0\, d\lambda \\
&= h_I(x_2, y_2; x_0', y_0', \lambda').
\end{aligned} \tag{2}
$$

During the imaging process, the target image T_i(x_i, y_i, λ) in plane 'a' is projected onto the first imaging plane 'b', where the light intensity is denoted T_0(x_0, y_0, λ). Then, according to Eqs. (1) and (2), the light intensity I_dt(x_2, y_2) in the speckle plane 'd' is given by

$$ I_{dt}(x_2, y_2) = \iiint T_0(x_0, y_0, \lambda)\, I_{dr}(x_2, y_2; x_0, y_0, \lambda)\, dx_0\, dy_0\, d\lambda, \tag{3} $$

which shows that I_dt(x_2, y_2) is a weighted integration of the pre-determined reference-arm intensity distribution I_dr(x_2, y_2; x_0, y_0, λ). The second-order correlation function of the spatial fluctuation between the pre-determined reference arm and the test arm can then be expressed as [28]

$$ G^{(2)}\big((x_2, y_2)_{dr}, (x_2, y_2)_{dt}\big) = \big\langle E_{dr}^*(x_2, y_2; x_0', y_0', \lambda')\, E_{dt}^*(x_2, y_2)\, E_{dt}(x_2, y_2)\, E_{dr}(x_2, y_2; x_0', y_0', \lambda') \big\rangle, \tag{4} $$

where ⟨···⟩ denotes the ensemble average of the light intensity distribution in plane 'd'.
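In discrete form, Eqs. (2) and (3) say that the test-arm image is a weighted superposition of the calibrated impulse-response speckles. A minimal sketch of this forward model in Python, with toy dimensions and random positive patterns standing in for the measured speckles (`N`, `L`, and `M` are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N spatial pixels and L spectral channels in plane 'b',
# M detector pixels in plane 'd'.
N, L, M = 16, 4, 256

# Calibration (Eq. (2)): one recorded speckle pattern I_dr per
# monochromatic point source (pixel, wavelength) pair.  Random positive
# patterns stand in for the measured impulse responses.
I_dr = rng.random((N * L, M))

# A target data-cube T0(x0, y0, lambda), flattened to length N*L.
T0 = rng.random(N * L)

# Eq. (3): the test-arm intensity is the T0-weighted superposition of
# the calibrated impulse responses (the integral becomes a sum).
I_dt = T0 @ I_dr          # shape (M,)

print(I_dt.shape)
```

The same weighted-superposition structure is what later lets the measurement be written as the matrix equation Y = AX.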


Substituting Eq. (3) into Eq. (4), we have

$$
\begin{aligned}
G^{(2)}\big((x_2, y_2)_{dr}, (x_2, y_2)_{dt}\big) &= \Big\langle I_{dr}(x_2, y_2; x_0', y_0', \lambda') \iiint T(x_0, y_0, \lambda)\, I_{dr}(x_2, y_2; x_0, y_0, \lambda)\, dx_0\, dy_0\, d\lambda \Big\rangle \\
&= \iiint T(x_0, y_0, \lambda)\, G^{(2)}_{dr}(x_0', y_0', \lambda'; x_0, y_0, \lambda)\, dx_0\, dy_0\, d\lambda,
\end{aligned} \tag{5}
$$

with

$$ G^{(2)}_{dr}(x_0', y_0', \lambda'; x_0, y_0, \lambda) = \big\langle E_{dr}^*(x_2, y_2; x_0', y_0', \lambda')\, E_{dr}^*(x_2, y_2; x_0, y_0, \lambda)\, E_{dr}(x_2, y_2; x_0, y_0, \lambda)\, E_{dr}(x_2, y_2; x_0', y_0', \lambda') \big\rangle, \tag{6} $$

which is the second-order correlation function of the spatial intensity fluctuation at different pixels with different wavelengths of the first imaging plane 'b'. Supposing that the light field in the speckle plane 'd' obeys a complex-valued circular Gaussian random distribution, then according to the Gaussian moment theorem Eq. (6) can be rewritten as

$$ G^{(2)}_{dr}(x_0', y_0', \lambda'; x_0, y_0, \lambda) = \big\langle I_{dr}(x_2, y_2; x_0', y_0', \lambda') \big\rangle \big\langle I_{dr}(x_2, y_2; x_0, y_0, \lambda) \big\rangle \Big[ 1 + g^{(2)}_{dr}(x_0', y_0', \lambda'; x_0, y_0, \lambda) \Big], \tag{7} $$

where g^(2)_dr(x_0', y_0', λ'; x_0, y_0, λ) represents the normalized second-order correlation function:

$$ g^{(2)}_{dr}(x_0', y_0', \lambda'; x_0, y_0, \lambda) = \frac{\big| J(x_0', y_0', \lambda'; x_0, y_0, \lambda) \big|^2}{\big\langle I_{dr}(x_2, y_2; x_0', y_0', \lambda') \big\rangle \big\langle I_{dr}(x_2, y_2; x_0, y_0, \lambda) \big\rangle}, \tag{8} $$

with

$$ J(x_0', y_0', \lambda'; x_0, y_0, \lambda) = \big\langle E_{dr}^*(x_2, y_2; x_0', y_0', \lambda')\, E_{dr}(x_2, y_2; x_0, y_0, \lambda) \big\rangle. \tag{9} $$

In order to calculate the normalized second-order correlation function g^(2)_dr, we assume the flat-field grating is large enough and its groove profile is rectangular, so that the transmission function of the grating can be expressed as [29]

$$ t_g(x, y) = \left[ \mathrm{rect}\left( \frac{x}{a} \right) e^{j\phi} \right] \otimes \frac{1}{d_{eff}} \mathrm{comb}\left( \frac{x}{d_{eff}} \right), \tag{10} $$

where a and d_eff are the slit width and the effective grating constant, respectively, and ⊗ denotes convolution. The transmission function and the height auto-correlation function of the spatial random phase modulator [16] are, at the same time, given by

$$ t_p(x_m, y_m, \lambda) = \exp\left[ j 2\pi (n-1) \frac{h(x_m, y_m)}{\lambda} \right], \tag{11} $$

$$ R_h(x_m, y_m; x_m', y_m') = \big\langle h(x_m, y_m)\, h(x_m', y_m') \big\rangle = \omega^2 \exp\left[ -\frac{(x_m - x_m')^2 + (y_m - y_m')^2}{\zeta^2} \right], \tag{12} $$

where h(x_m, y_m) and h(x_m', y_m') denote the height functions at different coordinates in the plane of the modulator, and ω and ζ are the height standard deviation and the lateral correlation length of the modulator, respectively.
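The height statistics of Eq. (12) can be emulated numerically by low-pass filtering white Gaussian noise, a common way to synthesize such a phase screen. A sketch under assumed parameter values (the grid size, ω, ζ, wavelength, and refractive index below are illustrative, not the actual modulator's specifications):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the paper's actual modulator values):
n_pix, dx = 256, 1e-6          # grid size and sample spacing [m]
omega = 1e-6                   # height standard deviation    [m]
zeta = 10e-6                   # lateral correlation length   [m]
wavelength = 540e-9            # probe wavelength             [m]
n_glass = 1.5                  # refractive index of the modulator

# Correlated Gaussian height map h(x, y): filter white noise with a
# Gaussian kernel in the Fourier domain.  The squared kernel's inverse
# transform is exp(-r^2 / zeta^2), reproducing the Gaussian
# autocorrelation of Eq. (12); then rescale so std(h) = omega.
f = np.fft.fftfreq(n_pix, d=dx)
fx, fy = np.meshgrid(f, f)
kernel = np.exp(-(np.pi * zeta) ** 2 * (fx ** 2 + fy ** 2) / 2)
noise = rng.standard_normal((n_pix, n_pix))
h = np.fft.ifft2(np.fft.fft2(noise) * kernel).real
h *= omega / h.std()

# Eq. (11): transmission function of the random phase modulator.
t_p = np.exp(1j * 2 * np.pi * (n_glass - 1) * h / wavelength)
```

Note that the FFT-based filtering imposes periodic boundary conditions, which is acceptable for a statistical sketch of the screen.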


Fig. 2. Schematic diagram of coordinate conversion relations.

According to flat-field grating diffraction theory, which combines the characteristics of concave mirrors with gratings, and under the Fresnel diffraction approximation, the light field in the diffractive plane produced by a monochromatic point source at pixel (x_0', y_0') with wavelength λ' in the first imaging plane can be computed as

$$
\begin{aligned}
E_{dr}(x_2, y_2; x_0', y_0', \lambda') ={}& \frac{\exp[jk(z_0+z_1)]}{j\lambda' z_0 z_1}\, e^{j\phi}\, \frac{a \lambda' z_1}{-\lambda'^2 z_2 z_3}\, \exp\!\left( j\frac{2\pi}{\lambda'} \frac{x_0'^2 + y_0'^2}{2 z_0} \right) \exp\!\left( j\frac{2\pi}{\lambda'} \frac{x_1^2 + y_1^2}{2 z_1} \right) \\
& \times \frac{1}{d_{eff}} \sum_{n=-\infty}^{\infty} \mathrm{sinc}\!\left( \frac{a}{\lambda' z_1} f_x \right) \delta\!\left( f_x - n \frac{\lambda' z_1}{d_{eff}} \right) \\
& \times \iint t_p(X_m, y_m, \lambda') \exp\!\left\{ j\frac{2\pi}{\lambda'} \left[ (z_2+z_3) + \frac{(X_2 - X_1)^2 + (y_2 - y_1)^2}{2(z_2+z_3)} \right] \right\} \\
& \times \exp\!\left\{ j\frac{\pi(z_2+z_3)}{\lambda' z_2 z_3} \left[ \left( X_m - \frac{z_3 X_1 + z_2 X_2}{z_2+z_3} \right)^2 + \left( y_m - \frac{z_3 y_1 + z_2 y_2}{z_2+z_3} \right)^2 \right] \right\} dX_m\, dy_m,
\end{aligned} \tag{13}
$$

where X_k = (r_H sin β_H + x_k) sec β_H, k = 1, 2, m, describes the x-axis coordinate transformation from the cross section of the flat-field grating to the diffractive plane 'c', and f_x = x_1 + x_0' z_1/z_0. As shown in Fig. 2, r_H and β_H are, respectively, the distance from the diffractive plane to the centre of the grating and the included angle between the diffractive plane and the cross section of the grating. Furthermore, the first-order diffraction of the grating is selected during imaging, so n = 1 in Eq. (13). Substituting Eq. (13) into Eq. (8) yields

$$
\begin{aligned}
g^{(2)}_{dr}(x_0, y_0, \lambda; x_0', y_0', \lambda') ={}& \frac{(z_2+z_3)^4}{\lambda^4 z_2^4 z_3^4} \iiiint t_p(X_m, y_m, \lambda)\, t_p^*(X_m', y_m', \lambda') \\
& \times \exp\!\left\{ \frac{j\pi(z_2+z_3)}{z_2 z_3} \left[ \frac{1}{\lambda}\big(\alpha^2 + \alpha'^2\big) - \frac{1}{\lambda'}\big(\beta^2 + \beta'^2\big) \right] \right\} dX_m\, dy_m\, dX_m'\, dy_m',
\end{aligned} \tag{14}
$$

where α = X_m − (z_3 X_1 + z_2 X_2)/(z_2+z_3), α' = y_m − (z_3 y_1 + z_2 y_2)/(z_2+z_3), β = X_m' − (z_3 X_1' + z_2 X_2)/(z_2+z_3), and β' = y_m' − (z_3 y_1' + z_2 y_2)/(z_2+z_3).


From Eq. (11), the ensemble average of t_p(X_m, y_m, λ) t_p*(X_m', y_m', λ') can be expressed as

$$ \big\langle t_p(X_m, y_m, \lambda)\, t_p^*(X_m', y_m', \lambda') \big\rangle = \left\langle \exp\!\left\{ j 2\pi (n-1) \left[ \frac{h(X_m, y_m)}{\lambda} - \frac{h(X_m', y_m')}{\lambda'} \right] \right\} \right\rangle = \widetilde{M}_{H H'}\!\left( \frac{2\pi(n-1)}{\lambda}, -\frac{2\pi(n-1)}{\lambda'} \right), \tag{15} $$

where M̃_HH' is the characteristic function with respect to the height functions H = h(X_m, y_m) and H' = h(X_m', y_m'). Since the surface height of the spatial random phase modulator obeys a Gaussian distribution, the characteristic function in Eq. (15) can be expressed as

$$ \widetilde{M}_{H H'}\!\left( \frac{2\pi(n-1)}{\lambda}, -\frac{2\pi(n-1)}{\lambda'} \right) = \exp\!\left\{ -\frac{1}{2} [2\pi(n-1)]^2 \left[ \frac{\omega^2}{\lambda^2} + \frac{\omega^2}{\lambda'^2} - \frac{2 R_h(X_m, y_m; X_m', y_m')}{\lambda \lambda'} \right] \right\}. \tag{16} $$

Substituting Eq. (16) into Eq. (14), the normalized second-order correlation function g^(2)_dr(x_0, y_0, λ; x_0', y_0', λ') is given by

$$
\begin{aligned}
g^{(2)}_{dr}(x_0, y_0, \lambda; x_0', y_0', \lambda') \approx{}& \exp\!\left\{ -[2\pi\omega(n-1)]^2 \left( \frac{1}{\lambda} - \frac{1}{\lambda'} \right)^2 \right\} \\
& \times \exp\!\left\{ \frac{2[2\pi\omega(n-1)]^2}{\lambda\lambda'} \exp\!\left[ -\frac{ z_3^2 z_1^2 \sec^2\!\beta_H \left( \dfrac{x_0 - x_0'}{z_0} - \dfrac{\lambda - \lambda'}{d_{eff}} \right)^2 + \dfrac{z_3^2 z_1^2}{z_0^2} (y_0 - y_0')^2 }{ (z_2+z_3)^2 \zeta^2 } \right] \right\}.
\end{aligned} \tag{17}
$$

Then, according to Eqs. (5), (7), and (17), the correlation function of intensity fluctuations ΔG^(2)((x_2, y_2)_dr, (x_2, y_2)_dt) is given by

$$
\begin{aligned}
\Delta G^{(2)}\big((x_2,y_2)_{dr},(x_2,y_2)_{dt}\big) ={}& G^{(2)}\big((x_2,y_2)_{dr},(x_2,y_2)_{dt}\big) - \big\langle I_{dr}(x_2,y_2;x_0',y_0',\lambda') \big\rangle \big\langle I_{dt}(x_2,y_2) \big\rangle \\
\approx{}& \frac{a^4}{(z_2+z_3)^4 z_0^4 d_{eff}^4}\, \mathrm{sinc}^4\!\left( \frac{n k' a}{d_{eff}} \right) T(x_0', y_0', k') \otimes \bigg\{ \exp\!\big( 2[2\pi\omega(n-1)]^2 k k' \big) \\
& \times \exp\!\left[ -\frac{z_3^2 z_1^2 \sec^2\!\beta_H}{(z_2+z_3)^2 \zeta^2 z_0^2}\, x_0'^2 - \frac{z_3^2 z_1^2}{(z_2+z_3)^2 \zeta^2 z_0^2}\, y_0'^2 \right] \\
& \times \exp\!\left[ -\frac{2 z_3^2 z_1^2 \sec^2\!\beta_H}{(z_2+z_3)^2 \zeta^2}\, \frac{(\lambda-\lambda')\, x_0'}{d_{eff}\, z_0} - 3[2\pi\omega(n-1)]^2 (k-k')^2 \right] \bigg\},
\end{aligned} \tag{18}
$$

where k = 1/λ, k' = 1/λ', and ⊗ represents the operation of convolution. According to Eq. (18), the image of the target T(x_0', y_0', k') can be separated from the correlation function of intensity fluctuations ΔG^(2)((x_2, y_2)_dr, (x_2, y_2)_dt), as explained in ghost imaging theory. Moreover, the spatial and spectral resolutions of the system are respectively defined as the spatial distance and the wavelength difference determined by the normalized second-order correlation function g^(2)_dr(x_0, y_0, λ; x_0', y_0', λ'); hence g^(2)_dr characterizes the resolution of the system. Compared with the previous system [16], g^(2)_dr in Eq. (17) depends on β_H and d_eff of the flat-field grating in the spectral dimension, which makes it possible to optimize the spectral resolution by adjusting these parameters.
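Experimentally, the correlation of intensity fluctuations is estimated by replacing the ensemble average with a spatial average over detector pixels. A small illustration of this fluctuation-correlation estimator (synthetic exponentially distributed intensities stand in for recorded speckle frames; for a Gaussian field this quantity equals the normalized g^(2)_dr of Eq. (8)):

```python
import numpy as np

def g2_fluctuation(I1, I2):
    """Normalized correlation of intensity fluctuations between two
    speckle patterns, <dI1*dI2> / (<I1><I2>), with the ensemble average
    replaced by a spatial average over detector pixels (the speckle
    field is assumed ergodic)."""
    dI1 = I1 - I1.mean()
    dI2 = I2 - I2.mean()
    return (dI1 * dI2).mean() / (I1.mean() * I2.mean())

# Synthetic stand-ins for recorded frames: fully developed speckle has
# exponentially distributed intensity.
rng = np.random.default_rng(2)
I_a = rng.exponential(size=(512, 512))
I_b = rng.exponential(size=(512, 512))   # independent of I_a

print(g2_fluctuation(I_a, I_a))   # identical patterns: contrast near 1
print(g2_fluctuation(I_a, I_b))   # independent patterns: near 0
```

The decay of this quantity as the two sources are separated in position or wavelength is what sets the spatial and spectral resolution in Fig. 5(a).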


Fig. 3. Simplified sketches of the calibration. (a) Point sources with different wavelengths at the same pixel of the field-of-view (FoV) illuminate the flat-field grating and are spatially dispersed and imaged onto the diffractive plane at different places, with an adjacent distance determined by the dispersion coefficient. (b) Point sources with the same wavelength at different pixels of the FoV illuminate the flat-field grating and are spatially dispersed and imaged onto the diffractive plane at different places, with an adjacent distance determined by the distance between the two pixels.

2.2. The matrix form and the reconstruction algorithm

Equation (18) describes the reconstruction of the image by the correlation algorithm of ghost imaging and is based on ensemble statistics of the light field. Since the light field is approximately an ergodic random process, ensemble averages over space and time are equivalent. In the GISC hyperspectral camera, each pixel of the CCD acts as a bucket detector in the test arm, so the correlation detection between the reference arm and the test arm in the time domain can be treated as equivalent to that in the space domain. Hence, the ensemble average in Eq. (18) can be replaced by a spatial average over the pixels (x_2, y_2) of the CCD. At the same time, the signal sampling mode of ghost imaging is consistent with CS theory, so the image can also be reconstructed by CS techniques. Under the framework of CS theory, the discrete model of Eq. (3) is given by

$$ Y = AX, \tag{19} $$

where A is the measurement matrix, which can be obtained by calibrating the pre-determined reference arm. For the calibration, suppose that the whole spectral band is divided into L equispaced spectral channels and the field-of-view (FoV) into N pixels. Assuming that all monochromatic point sources in the FoV are mutually incoherent, the calibration of the incoherent intensity impulse response functions for point sources at different pixels and wavelengths in the FoV is shown in Fig. 3. The whole calibration can thereby be regarded as acquiring incoherent intensity impulse response functions from different 3D spectral


data-cube locations. We denote the speckle intensity for the m-th spatial pixel at wavelength λ_k recorded by the CCD as I_m^{λ_k} (M = l × n detector pixels), and reshape it into a column vector A_m^{λ_k} = (A_{1,m}^{λ_k}, A_{2,m}^{λ_k}, ..., A_{M,m}^{λ_k})^T. After L × N measurements, the whole measurement matrix A, of size M × (N × L), is obtained as

$$
A = \begin{pmatrix}
A_{1,1}^{\lambda_1} & \cdots & A_{1,N}^{\lambda_1} & A_{1,1}^{\lambda_2} & \cdots & A_{1,N}^{\lambda_2} & \cdots & A_{1,1}^{\lambda_L} & \cdots & A_{1,N}^{\lambda_L} \\
A_{2,1}^{\lambda_1} & \cdots & A_{2,N}^{\lambda_1} & A_{2,1}^{\lambda_2} & \cdots & A_{2,N}^{\lambda_2} & \cdots & A_{2,1}^{\lambda_L} & \cdots & A_{2,N}^{\lambda_L} \\
\vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\
A_{M,1}^{\lambda_1} & \cdots & A_{M,N}^{\lambda_1} & A_{M,1}^{\lambda_2} & \cdots & A_{M,N}^{\lambda_2} & \cdots & A_{M,1}^{\lambda_L} & \cdots & A_{M,N}^{\lambda_L}
\end{pmatrix}. \tag{20}
$$
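The column-stacking of Eq. (20) maps directly onto array reshaping: each calibration image becomes one column, grouped first by spectral channel and then by spatial pixel. A sketch with toy sizes (the dimensions and the random calibration stack are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes: N spatial pixels, L spectral channels, and a
# detector of l x n pixels (M = l * n).
N, L = 9, 3
l, n = 8, 8
M = l * n

# Calibration stack: one detector image per (wavelength, pixel) pair,
# indexed [k, m].  Random stand-ins for measured speckle patterns.
calib = rng.random((L, N, l, n))

# Each image is flattened into a column A^{lambda_k}_m; columns are
# ordered by spectral channel first, spatial pixel second, as in Eq. (20).
A = np.stack([calib[k, m].ravel() for k in range(L) for m in range(N)],
             axis=1)
print(A.shape)   # (M, N*L)
```

A single-exposure object measurement then corresponds to one length-M vector Y, and Eq. (19) becomes an ordinary matrix-vector relation.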

During the imaging process, denote the unknown object image as a column vector X_{L×N} = (x_1, x_2, ..., x_{L×N})^T, and the object intensity distribution recorded by the same CCD as I_t (M = l × n pixels), which is then reshaped into Y_M = (y_1, y_2, ..., y_M)^T. This measurement process can be expressed elementwise as

$$ y_p = \sum_{\lambda=1}^{L} \sum_{q=1}^{N} A_{p,q}^{\lambda}\, x_q^{\lambda}, \tag{21} $$

where p denotes the p-th spatial location on the detector. To reconstruct the 3D spectral object from the detected 2D signal, Eq. (19) must be solved. The image of a spectral object exhibits spatial and spectral correlations, so its reconstruction can be posed as a minimization problem with an l_1 regularizer and a nonnegativity constraint. In our camera, an efficient TV-RANK algorithm is applied [30]:

$$ X = \arg\min_{X \geq 0} \| Y - AX \|_2^2 + \mu_1 \| \Phi X \|_1 + \mu_2 \| X \|_*, \tag{22} $$

where ‖X‖_* is the nuclear norm, Φ is the sparsity transform, and µ_1, µ_2 > 0 are weight coefficients. In this work we choose Φ to be the difference operator, so that ‖ΦX‖_1 is the spatial total variation (TV).
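As a simplified stand-in for the TV-RANK solver of Eq. (22), the sketch below solves the related problem min_{X≥0} ‖Y − AX‖² + µ‖X‖₁ by projected ISTA; it omits the TV and nuclear-norm terms, and uses a synthetic Gaussian measurement matrix rather than a calibrated speckle matrix:

```python
import numpy as np

def ista_nonneg(A, y, mu=0.01, n_iter=3000):
    """Projected ISTA for  min_{x >= 0} ||y - Ax||_2^2 + mu*||x||_1.
    A simplified stand-in for the TV-RANK solver of Eq. (22): the TV
    and nuclear-norm terms are omitted, leaving l1 + nonnegativity."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))     # gradient step on ||y-Ax||^2
        x = np.maximum(x - step * mu, 0.0)     # shrink, then project x >= 0
    return x

# Synthetic compressive measurement (illustrative sizes).
rng = np.random.default_rng(4)
M, NL = 60, 100
A = rng.standard_normal((M, NL)) / np.sqrt(M)
x_true = np.zeros(NL)
x_true[rng.choice(NL, 5, replace=False)] = 1.0  # sparse nonnegative cube
y = A @ x_true
x_rec = ista_nonneg(A, y)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

With many fewer measurements than unknowns (here 60 versus 100), the sparse target is still recovered accurately, which is the sub-Nyquist behavior the camera relies on.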

3. Experimental results

3.1. Experimental setup

Figure 4 depicts the experimental setup of the GISC hyperspectral camera based on a flat-field grating, which adds a flat-field grating in front of the spatial random phase modulator to spatially disperse the light fields of different wavelengths. The objective lens (Tamron AF 70-300 mm f/4-5.6) with a focal length of f = 150 mm projects the unknown target image onto the first imaging plane. A 536-545 nm band-pass filter located behind the objective lens ensures that only the spectral data-cube corresponding to the 536-545 nm band is measured by the system. A beam splitter with a 1:1 splitting ratio in front of the first imaging plane splits the light field into two paths, and a surveillance camera CCD1 (AVT Stingray F-504C, 3.45 µm × 3.45 µm pixels) records the conventional image of one path as a reference. A flat-field grating (CELO GF106, focal length f = 70 mm, dispersion coefficient D(λ) = 30 nm/mm) disperses and images the light fields of different wavelengths onto the diffractive surface, and a spatial random phase modulator (SIGMAKOKI DFSQ1-30C02-1000) modulates the light fields of different wavelengths to generate uncorrelated speckles. Then CCD2 (Apogee, 13 µm × 13 µm pixels) records, in a single exposure, the intensity distribution of the speckles magnified by a microscope objective with magnification β = 10. Calibration of the measurement matrix is performed with the calibration setup shown in the blue box of Fig. 4. A xenon lamp produces thermal light to illuminate the entrance of the monochromator


Fig. 4. The experimental setup of GISC hyperspectral camera based on a flat-field grating. During the calibration, a calibration setup is put in front of the objective lens to acquire incoherent intensity impulse response functions of the system. After the calibration, an object is put before the objective lens to obtain the detected signal which is the overlay of the speckles from different pixels and wavelengths of the object.

(WDG30-Z), which generates quasi-monochromatic light of different wavelengths. The quasi-monochromatic light is then coupled into an optical fiber with a diameter of 20 µm to form a quasi-monochromatic point light source. The output end of the fiber is placed in the equivalent plane, which is located at the focal plane of a collimating lens (Olympus M.ZUIKO 40-150 mm). The objective lens of the system collects the parallel light from the collimating lens. During the calibration, the first imaging plane is divided into 113 × 113 pixels, as determined by the spatial resolution in Eq. (14), and the number of spectral channels is 10, spanning 536 nm to 545 nm at an interval of 1 nm.

3.2. Imaging results

After obtaining the measurement matrix, the normalized second-order correlation function in Eq. (14) is calculated from the experimental data. Figure 5(a) shows the comparison between the theoretical and experimental results for g^(2)_dr. To verify the spectral resolution of the GISC hyperspectral camera based on a flat-field grating given by Eq. (14), two point light sources generated by optical fibers with diameters of 20 µm, at central wavelengths of 539 nm and 540 nm as shown in Fig. 5(b), were respectively


Fig. 5. Experimentally determined resolution. (a) Two normalized second-order correlation functions of the light fields: at one pixel in the FoV with two different wavelengths, g^(2)_dr(x_0', y_0', λ; x_0', y_0', λ'), and at two different pixels with the same wavelength, g^(2)_dr(x_0, y_0, λ'; x_0', y_0', λ'), whose half widths respectively determine the spectral resolution (1 nm) and the spatial resolution (50 µm). The experimental result (blue line) is consistent with the theoretical result (red dotted line). (b) Two point light sources with two different wavelengths, 539 nm and 540 nm, separated by 40 µm. (c) Analysis of the spectral and spatial resolution from the reconstructed image of the two points. Upper: a 3D view of the spectral and spatial distribution. Lower left: the red and green lines are the spectral distributions of the two points with 539 nm and 540 nm center wavelengths; lower right: the red line is the spatial distribution of the two points. The spectra and positions of the points are considered resolved if they are separated by a dip of at least 20%. The result verifies the resolution determined by the normalized second-order correlation function g^(2)_dr.

used, and the distance between them was 40 µm. The modulated target intensity distribution Y was detected by CCD2. From the reconstructed result, we took the middle row of the reconstructed image and arranged it by wavelength, as shown in Fig. 5(c), which further illustrates that the spectral resolution of the GISC hyperspectral camera based on a flat-field grating is consistent with the theoretical result in Fig. 5(a). After the calibration process, the calibration setup shown in the blue box of Fig. 4 is replaced by a real object to perform imaging experiments. Our experiment used, as objects, a transmission institute logo and a colored toy illuminated with thermal light, as displayed in Fig.


Fig. 6. Experimental imaging results. (a) A little girl taken by a conventional camera. (b, c) The little girl and the institute logo, passing through a 536-545 nm band-pass filter, detected by CCD1. (d) The reconstructed spectral images of the institute logo and the little girl from the 3D spectral data-cube, displaying all channels from 536 to 545 nm.

6, where Fig. 6(a) is acquired by a conventional camera and Figs. 6(b) and 6(c) are detected by CCD1 of the system. Figure 6(d) shows the images reconstructed by the TV-RANK algorithm. In the reconstruction process, the sampling rate of the 3D data-cube is 60% and the number of iterations in the algorithm is 160. Compared with the images in Figs. 6(a)-6(c), the reconstructed image is consistent with the actual object, which verifies the practical value of the GISC hyperspectral camera based on a flat-field grating.

4. Conclusion

In conclusion, we demonstrated a new optical system: a GISC spectral camera with a flat-field grating in front of a spatial random phase modulator. The flat-field grating spatially disperses the light fields, so that the position on the modulator illuminated by light fields of different wavelengths is translated by a certain distance, hence decoupling the spectral resolution from the spatial resolution and improving the spectral resolution. We theoretically and experimentally demonstrated a spectral resolution of about 1 nm and completed imaging experiments on spectral objects. The new system provides a basis for optimizing the measurement matrix for different wavelengths and for facilitating achromatic imaging by designing the spatial random phase modulator according to the imaging scene, so as to improve the reconstruction quality of images in the future. As a new optical imaging system, the GISC hyperspectral camera based on a flat-field grating has the potential to be applied in ultra-fast measurement [3], atmospheric remote sensing [31, 32], and biological microscopy [33].

Funding

Hi-Tech Research and Development Program of China (2013AA122901, 2013AA122902).