RGB-NIR multispectral camera

Zhenyue Chen,1,2 Xia Wang,2 and Rongguang Liang1,*

1College of Optical Sciences, University of Arizona, Tucson, Arizona 85721, USA
2Beijing Institute of Technology, Key Laboratory of Photoelectronic Imaging Technology and System of Ministry of Education of China, School of Optoelectronics, Beijing 100081, China
*[email protected]

Abstract: A multispectral imaging technique based on a new CMOS camera is proposed. With a four-channel Bayer pattern, the camera can acquire four spectral images simultaneously. We have developed a color correction process to obtain accurate color information, and we have demonstrated its applications in portrait enhancement, shadow removal, and vein enhancement.

©2014 Optical Society of America

OCIS codes: (110.4234) Multispectral and hyperspectral imaging; (100.2980) Image enhancement; (110.0110) Imaging systems.

References and links

1. A. R. Harvey, J. E. Beale, A. H. Greenaway, T. J. Hanlon, and J. W. Williams, "Technology options for hyperspectral imaging," Proc. SPIE 4132, 13–24 (2000).
2. L. L. Thompson, "Remote sensing using solid-state array technology," Photogramm. Eng. 45, 47–55 (1979).
3. J. Y. Hardeberg, F. Schmitt, and H. Brettel, "Multispectral color image capture using a liquid crystal tunable filter," Opt. Eng. 41(10), 2532–2548 (2002).
4. A. R. Harvey and D. W. Fletcher-Holmes, "Birefringent Fourier-transform imaging spectrometer," Opt. Express 12(22), 5368–5374 (2004).
5. T. Zimmermann, J. Rietdorf, and R. Pepperkok, "Spectral imaging and its applications in live cell microscopy," FEBS Lett. 546(1), 87–92 (2003).
6. M. Rast, J. L. Bezy, and S. Bruzzi, "The ESA Medium Resolution Imaging Spectrometer MERIS a review of the instrument and its mission," Int. J. Remote Sens. 20(9), 1681–1702 (1999).
7. Y. R. Chen, K. Chao, and M. S. Kim, "Machine vision technology for agricultural applications," Comput. Electron. Agric. 36(2–3), 173–191 (2002).
8. D. L. Farkas and D. Becker, "Applications of spectral imaging: detection and analysis of human melanoma and its precursors," Pigment Cell Res. 14(1), 2–8 (2001).
9. L. Weitzel, A. Krabbe, H. Kroker, N. Thatte, L. E. Tacconi-Garman, M. Cameron, and R. Genzel, "3D: The next generation near-infrared imaging spectrometer," Astron. Astrophys. Suppl. Ser. 119(3), 531–546 (1996).
10. D. W. Fletcher-Holmes and A. R. Harvey, "Spectral imaging with a hyperspectral fovea," J. Opt. A, Pure Appl. Opt. 7(6), S298–S302 (2005).
11. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, "Video rate spectral imaging using a coded aperture snapshot spectral imager," Opt. Express 17(8), 6368–6388 (2009).
12. A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, "Generalization of the Lyot filter and its application to snapshot spectral imaging," Opt. Express 18(6), 5602–5608 (2010).
13. M. W. Kudenov and E. L. Dereniak, "Compact real-time birefringent imaging spectrometer," Opt. Express 20(16), 17973–17986 (2012).
14. L. Gao, R. T. Kester, and T. S. Tkaczyk, "Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy," Opt. Express 17(15), 12293–12308 (2009).
15. L. Kong, S. Sprigle, D. Yi, F. Wang, C. Wang, and F. Liu, "Developing handheld real time multispectral imager to clinically detect erythema in darkly pigmented skin," Proc. SPIE 7557, 75570G (2010).
16. http://www.jai.com/en/products/lq-200cl
17. J. Zhang, S. Lin, C. Zhang, Y. Chen, L. Kong, and F. Chen, "An Evaluation Method of a Micro-arrayed Multispectral Filter Mosaic," Proc. SPIE 8759, 875908 (2013).
18. Y. Lu, C. Fredembach, M. Vetterli, and S. Susstrunk, "Designing color filter arrays for the joint capture of visible and near-infrared images," in Proceedings of IEEE Conference on Image Processing (ICIP) (Institute of Electrical and Electronics Engineers, Cairo, 2009), pp. 3797–3800.
19. J. Huo, Y. Chang, J. Wang, and X. Wei, "Robust Automatic White Balance Algorithm using Gray Color Points in Images," IEEE Trans. Consum. Electron. 52(2), 541–546 (2006).
20. J. Chiang, "Gray World Assumption," (1999). http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/jchiang/intro2.html.
21. B. Lindbloom, "Chromatic Adaptation," (2009). http://www.brucelindbloom.com/?Eqn_ChromAdapt.html.
22. R. S. Berns, Billmeyer and Saltzman's Principles of Color Technology (Wiley-Interscience, 2000).

#205096 - $15.00 USD (C) 2014 OSA

Received 22 Jan 2014; revised 12 Feb 2014; accepted 15 Feb 2014; published 24 Feb 2014 10 March 2014 | Vol. 22, No. 5 | DOI:10.1364/OE.22.004985 | OPTICS EXPRESS 4985

23. V. Cheung, S. Westland, D. Connah, and C. Ripamonti, "A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms," Coloration Technol. 120(1), 19–25 (2004).
24. B. Lindbloom, "RGB/XYZ Matrices," (2012). http://www.brucelindbloom.com/index.html?Eqn_RGB_to_XYZ.html.
25. C. Fredembach, N. Barbusica, and S. Susstrunk, "Combining visible and near-infrared images for realistic skin smoothing," in Proc. IS&T/SID 17th Color Imaging Conference (2009).
26. V. C. Paquit, K. W. Tobin, J. R. Price, and F. Mériaudeau, "3D and multispectral imaging for subcutaneous veins detection," Opt. Express 17(14), 11360–11365 (2009).
27. L. Wang, G. Leedham, and D. Cho, "Minutiae feature analysis for infrared hand vein pattern biometrics," Pattern Recognit. 41(3), 920–929 (2008).

1. Introduction

Multispectral imaging is a technology originally developed for space-based imaging. It has since been used successfully in many fields, such as environmental observation, defense and security, and biomedical applications. Traditional spectral imaging techniques involve time-sequential scanning of either a spatial or a spectral dimension combined with snapshot imaging of the other two dimensions [1]. These methods are exemplified by the push-broom scan of a one-dimensional spectral imager across the required field of view [2], tunable spectral filtering [3], and imaging Fourier-transform spectrometry [4]. These time-sequential techniques are restricted to applications where an extended recording time is acceptable, for example microscopy [5], remote sensing [6, 7], and biomedical imaging [8].

Recently, several snapshot multispectral imaging techniques have been developed. Some use bulk optics [9] or fiber optics [10] to reformat a two-dimensional image into a one-dimensional array and then use a conventional one-dimensional imaging spectrometer to obtain spectral information. The computed tomographic imaging spectrometer (CTIS) uses a diffractive optical element to disperse the image at the detector and then reconstructs the spectral data cube [11]. The image-replicating imaging spectrometer (IRIS) employs a Lyot filter to spectrally demultiplex an image onto a single conventional detector array [12]. The snapshot hyperspectral imaging Fourier transform (SHIFT) spectrometer uses birefringent polarization optics to obtain images in different spectral bands [13]. Another recently developed snapshot hyperspectral imaging method uses an image slicer to spectrally redirect the image to different locations on the detector [14]. While these multispectral imaging techniques provide detailed spectral information, the systems are usually complex.
For some applications, only a few spectral bands (for example, blue, green, red, and near infrared (NIR)) are sufficient. For this type of application, the typical configurations are (1) one monochromatic camera with a filter wheel and (2) two or more cameras with dichroic beamsplitters [15, 16]. For instance, JAI Corp. has developed a prism-based 4-CCD line scan camera that can simultaneously capture R, G, B, and NIR images. A newer approach is to develop a custom sensor with a nonconventional color filter array [17, 18]. We have developed a multispectral imaging technique using a new CMOS sensor with a four-channel Bayer pattern. Although the proposed method achieves only a low spectral resolution, it has high spatial sampling and can be used in applications that require only several spectral bands in the visible and NIR wavelengths. In Section 2, the new RGB-NIR multispectral camera is introduced. In Section 3, we discuss a complete color correction pipeline for obtaining accurate color information. Three applications, namely portrait enhancement, shadow removal, and vein enhancement, are discussed in Section 4.

2. RGB-NIR camera

The photodetectors in both CCD and CMOS sensors have a wide spectral sensitivity range, from UV to NIR. For a sensor with a conventional red, green, and blue color filter array (CFA), NIR light often degrades image quality; therefore, a window with a low-pass coating is often used to block the NIR light. Even if the sensor window does not block NIR light, it is impossible to extract NIR information from a conventional sensor with an RGB CFA. In order to extract both visible and NIR information from a single sensor, a new type of sensor with a four-band color filter array has been developed. Figure 1(a) shows the layout of the new CFA, and Fig. 1(b) shows its typical spectral sensitivity. As shown in Fig. 1(b), there is cross-talk between the channels; therefore, new color separation algorithms are needed to extract the NIR, red, green, and blue information from a single image.

Fig. 1. (a) Color filter array with four bandpass filters and (b) spectral sensitivity of the color filter array.

3. Color correction and spectral data extraction

3.1 Experimental setup

The experiment is carried out in the lab under ceiling fluorescent lamps. Since there is no NIR component in the spectrum of the fluorescent lamps, a high-power NIR LED at 850 nm with a FWHM of 40 nm is employed to provide NIR illumination. The NIR LED is placed 1.5 m from the target for uniform illumination. A 25 mm VIS-NIR compact fixed focal length lens from Edmund Optics, which has good transmission in both the visible and NIR bands, images the target onto the RGB-NIR sensor. The working distance between the camera and the target is 1.5 m. The experimental setup is shown in Fig. 2.

Fig. 2. The experimental setup

3.2 Color correction

In order to obtain accurate color information, a color correction process has been developed. The standard X-Rite ColorChecker is used as the calibration target. The ColorChecker target is designed to deliver true-to-life image reproduction so that photographers can predict and control how color will look under various illuminations. Each of the 24 color patches represents the actual color of a natural object and reflects light just like its real-world counterpart. Figure 3 shows the raw image from the RGB-NIR sensor; its color differs markedly from that of a standard RGB camera with an optimized color correction process. Therefore, color correction is needed to obtain accurate color information from this sensor. In our work, the four images, R, G, B, and NIR, are extracted from the corresponding channels in the raw data; no data interpolation is performed.

Fig. 3. Raw RGB image from RGB-NIR camera

Fig. 4. Pipeline of color correction

The color correction pipeline used in this paper is shown in Fig. 4. Automatic white balance (AWB) is one of the most important functions for cameras to achieve high-quality images. Existing AWB algorithms can be classified into two categories: global and local [19]. The standard ColorChecker we use has six gray patches that can be used for white balance correction. In this study, we use the simple global AWB algorithm, also known as the gray world algorithm [20]. First, color values are estimated from the mean RGB values across the six gray patches of the ColorChecker image and then converted to the CIE 1931 XYZ color space. The XYZ values are used to calculate xy chromaticity, converted back to XYZ space, and Y is normalized to 100. After that, a chromatic adaptation is carried out to reach the target D65 canonical illuminant, as shown in Eq. (1) [21].
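The gray-world balancing step can be sketched in a few lines. This is an illustrative version only: it uses the mean over the whole image rather than the paper's six gray patches, and the function name and test values are assumptions, not the authors' implementation.

```python
import numpy as np

def gray_world_awb(img):
    """Simple global (gray-world) automatic white balance.

    img: float array of shape (H, W, 3), RGB in [0, 1].
    Scales each channel so its mean matches the overall mean,
    i.e. the scene is assumed to average to gray.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gray = means.mean()                       # target neutral level
    gains = gray / means                      # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# A reddish flat patch is pulled back toward neutral gray.
patch = np.full((4, 4, 3), [0.8, 0.5, 0.5])
balanced = gray_world_awb(patch)
```

After balancing, the three channel means are equal, which is exactly the gray-world assumption enforced as a constraint.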

XD Y   M  D  Z D 

XS  Y ,  S  Z S 

(1)

where $[X_S, Y_S, Z_S]^T$ represents a source color and $[X_D, Y_D, Z_D]^T$ represents a destination color. $M$ is a linear transformation matrix that depends on the source reference white and the destination reference white. To obtain $M$, the XYZ values are transformed into a cone response domain $[\rho, \gamma, \beta]^T$. After the vector components are scaled by factors dependent on both the source reference white and the destination reference white, they are transformed back to XYZ using the inverse transform, as shown in Eq. (2),

M  MA

1

D S  0   0

0

0 0

D S 0

D

 M ,  A  S 

(2)

where $M_A$ is the matrix defined by the chosen cone response domain. In this paper, the von Kries method is used to set $M_A$ [21].


The source reference white and destination reference white in the cone response domain are calculated using Eqs. (3) and (4),

S   X WS     M  Y  , A  WS   S   S   ZWS 

(3)

$$\begin{bmatrix} \rho_D \\ \gamma_D \\ \beta_D \end{bmatrix} = M_A \begin{bmatrix} X_{WD} \\ Y_{WD} \\ Z_{WD} \end{bmatrix}, \qquad (4)$$

where $[X_{WS}, Y_{WS}, Z_{WS}]^T$ and $[X_{WD}, Y_{WD}, Z_{WD}]^T$ are the source reference white and destination reference white, respectively. In the transformation, the D65 canonical illuminant is chosen as the destination illumination condition and the CIE 1931 2° observer is chosen as the viewing condition, giving $X_{WD} = 95.047$, $Y_{WD} = 100.000$, $Z_{WD} = 108.883$ [22].
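Equations (1)–(4) combine into a short routine. The sketch below is a minimal illustration; the von Kries cone-response matrix is taken from the Lindbloom reference cited above [21], and the D50 white point used in the check is an assumed example source illuminant, not one from the experiment.

```python
import numpy as np

# Von Kries (Hunt-Pointer-Estevez) cone-response matrix M_A,
# as tabulated by Lindbloom [21].
M_A = np.array([[ 0.40024, 0.70760, -0.08081],
                [-0.22630, 1.16532,  0.04570],
                [ 0.00000, 0.00000,  0.91822]])

def adaptation_matrix(white_src, white_dst):
    """Chromatic-adaptation matrix M of Eq. (1), built via Eqs. (2)-(4).

    white_src, white_dst: XYZ tristimulus values of the source and
    destination reference whites.
    """
    cone_src = M_A @ white_src                 # Eq. (3)
    cone_dst = M_A @ white_dst                 # Eq. (4)
    scale = np.diag(cone_dst / cone_src)       # per-cone gains
    return np.linalg.inv(M_A) @ scale @ M_A    # Eq. (2)

# Sanity check: adapting the source white must yield the destination white.
d50 = np.array([96.422, 100.000, 82.521])      # example source white
d65 = np.array([95.047, 100.000, 108.883])     # destination white [22]
M = adaptation_matrix(d50, d65)
```

By construction, `M @ white_src` reproduces `white_dst` exactly, which is a useful unit test for any chromatic-adaptation implementation.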

Fig. 5. Image after white balance correction

After the white balance correction and chromatic adaptation, the ColorChecker image is as shown in Fig. 5; this image still has some color errors. Therefore, color calibration is necessary before correct spectral information can be extracted from the sensor. There are three color calibration methods in digital imaging: physical models, look-up tables, and numerical methods. Look-up tables and numerical methods are most often used to characterize camera systems [23]. In the physical model method, a relatively small number of color samples is required to derive the parameters of the model; however, the quality of the model-based characterization is determined by how well the model reflects the real behavior of the device, and certain types of devices are not readily amenable to such modeling. In the look-up table method, a large number of images with various RGB values and the corresponding CIE XYZ values are needed. In this study, we use the numerical method. First, a linear transform is employed to convert the RGB values to CIE XYZ values; then the color correction is applied in CIE XYZ space. Finally, the CIE XYZ values are converted back to RGB values with the inverse linear transform. Since the RGB-NIR camera works in the sRGB color space, the transform matrices between sRGB and CIE XYZ are used, as given in Eq. (5) and Eq. (6) [24].

0.412456 0.357576 0.180438   0.212673 0.715152 0.072175 ,   0.019334 0.119192 0.950304

(5)

 3.240454 1.537139 0.498531   0.969266 1.876011 0.0415560 .    0.055643 0.204026 1.057225 

(6)

M sRGB 2 XYZ

M XYZ 2 sRGB
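The sRGB-to-XYZ step of Eq. (5), together with the sRGB gamma expansion discussed below, can be sketched as follows. The matrix values are the ones printed in Eq. (5); computing Eq. (6) as the numerical inverse (rather than retyping it) is a choice made here for compactness.

```python
import numpy as np

# Eq. (5): linear sRGB -> XYZ (D65), values from [24].
M_sRGB2XYZ = np.array([[0.412456, 0.357576, 0.180438],
                       [0.212673, 0.715152, 0.072175],
                       [0.019334, 0.119192, 0.950304]])
# Eq. (6) up to rounding error in the printed digits.
M_XYZ2sRGB = np.linalg.inv(M_sRGB2XYZ)

def srgb_to_xyz(rgb):
    """Gamma-expand companded sRGB in [0, 1], then apply Eq. (5)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045,
                   rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    return M_sRGB2XYZ @ lin

# The sRGB white (1, 1, 1) should map to the D65 white point.
xyz_white = srgb_to_xyz([1.0, 1.0, 1.0])
```

The row sums of Eq. (5) recover (0.95047, 1.00000, 1.08883), i.e. the D65 white used throughout the paper, which is a quick consistency check on the transcribed matrix.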


Before the transformation is applied, a gamma correction is performed so that the RGB outputs of the camera are linearly related to the XYZ tristimulus values of the ColorChecker. During the experiment, the illumination is kept unchanged, and automatic white balance and automatic exposure are not applied [23]. The X-Rite Classic ColorChecker is used as the calibration target. Since the output RGB values $[R_{camera}, G_{camera}, B_{camera}]^T$ from the camera and the destination XYZ values $[\hat{X}, \hat{Y}, \hat{Z}]^T$ of the ColorChecker are known, a mapping color correction matrix (CCM) can be found from these two groups of values. The simplest transformation is a 3 × 3 matrix, as shown in Eq. (7),

$$\begin{bmatrix} \hat{X} \\ \hat{Y} \\ \hat{Z} \end{bmatrix} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} \begin{bmatrix} R_{camera} \\ G_{camera} \\ B_{camera} \end{bmatrix} = CCM \begin{bmatrix} R_{camera} \\ G_{camera} \\ B_{camera} \end{bmatrix}. \qquad (7)$$

However, when the camera's spectral sensitivities differ greatly from the CIE color matching functions, the estimated tristimulus values have considerable errors. A common technique to improve performance is to expand Eq. (7) to higher order [22]. Since higher-order CCMs are quite sensitive to noise and the NIR channel of the camera tends to have a low SNR, only the first-order NIR component is introduced. Several CCMs have been investigated in our experiments; a CCM with dimensions of 3 × 19 has the best performance for our sensor. The expanded mapping relationship is given in Eq. (8),

$$V_{destination} = \begin{bmatrix} \hat{X} \\ \hat{Y} \\ \hat{Z} \end{bmatrix} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1,19} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2,19} \\ \alpha_{31} & \alpha_{32} & \cdots & \alpha_{3,19} \end{bmatrix} V_{camera} = CCM \, V_{camera}, \qquad (8)$$

with

$$V_{camera} = [R, G, B, NIR, R^2, G^2, B^2, RG, RB, GB, R^3, G^3, B^3, R^2G, R^2B, G^2R, G^2B, B^2R, B^2G]^T,$$

where R, G, and B are the camera output values in the sRGB color space. A color correction matrix may cause gray-balance errors. This can be avoided by constraining the color correction matrix coefficients as shown in Eq. (9) [22]:


11  12  13  X n  21   22   23  Yn  31   32   33  Z n 14  15  ...  119  0.0 ,  24   25  ...   219  0.0  34   35  ...   319  0.0

(9)

Equations (8) and (9) form an over-determined system for the CCM: there are more equations than unknowns. The solution of Eq. (8) may be obtained by computing the pseudo-inverse of $V_{camera}$. This is easily done in a programming language such as MATLAB, whose matrix right divide operator returns the least-squares solution of an under- or over-determined system of equations. In MATLAB, the CCM is obtained with the formula in Eq. (10),

$$CCM = V_{destination} / V_{camera}. \qquad (10)$$

Once the CCM is acquired, the ColorChecker image can be reconstructed. The reconstructed image in Fig. 6(a) is noisy because of small fluctuations in the signal and the sensitivity of the high-order transform to them. In order to suppress the noise level, the maximum and minimum values of each patch are also taken into account, as shown in Eq. (11),

Vcamera  Vcamera _ mean Vcamera _ min Vcamera _ max  . (11) The noise is reduced as shown in Fig. 6(b). Color error between the reconstructed ColorChecker and ideal ColorChecker are calculated in CIELAB and CIE94 color space [22] and the results are shown in Fig. 6(c), from which it can be seen that all the color errors after correction are smaller than 1.0 and thus acceptable.

Fig. 6. (a) Image after color correction, (b) Image after noise reduction, (c) Color error computation
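The color-error computation of Fig. 6(c) relies on an XYZ-to-CIELAB conversion followed by a Euclidean distance. The sketch below implements the simpler CIELAB difference (Delta E*ab) only, using the standard CIE formulas and the D65 white of the paper; the CIE94 variant adds weighting terms not shown here.

```python
import numpy as np

D65 = np.array([95.047, 100.000, 108.883])   # reference white [22]

def xyz_to_lab(xyz, white=D65):
    """CIE 1976 L*a*b* from XYZ, standard CIE formulas."""
    t = np.asarray(xyz, dtype=float) / white
    # Piecewise cube-root with the linear toe for small ratios.
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
    L = 116*f[1] - 16
    a = 500*(f[0] - f[1])
    b = 200*(f[1] - f[2])
    return np.array([L, a, b])

def delta_e_ab(xyz1, xyz2):
    """CIELAB color difference Delta E*ab between two XYZ colors."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2))
```

A Delta E*ab below 1.0, as reported for all corrected patches, is commonly taken to be imperceptible to a typical observer.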


4. Applications

The RGB-NIR camera can be used in many applications that require only several spectral bands in the visible and NIR, such as computer vision, digital surveillance, and medical diagnosis. We have demonstrated its applications in portrait enhancement, shadow removal, and vein enhancement.

4.1 Portrait enhancement

The appearance of human skin is wavelength dependent. NIR light is less absorbed and less scattered than visible light and penetrates deeper into the skin layers. As a result, NIR skin images contain fewer skin surface details than normal RGB color images, so the NIR image can be blended into the RGB color image to smooth the skin appearance. Figure 7 shows one example of the RGB-NIR sensor applied to portrait enhancement.

Fig. 7. Portrait enhancement method using NIR image. (a) Raw RGB image, (b) RGB image after color correction, (c) Enhanced RGB image with NIR component.

Due to the extended spectral sensitivity of the R, G, and B channels in the NIR, the skin in Fig. 7(b) is already smoother than in a traditional RGB image. Figure 7(c) shows the portrait further enhanced with the NIR component: the intensity I in the HSI color space is reconstructed from the NIR image [25]. Small, less appealing details are attenuated in the enhanced image, while high-frequency details, such as hair and eyelashes, are effectively preserved.

4.2 Shadow removal

Under certain lighting conditions, captured images may contain unfavorable shadows. Removing them is challenging, especially when active white light illumination is not allowed. With active NIR illumination and the RGB-NIR camera, we can effectively reconstruct the image without the shadow. Figure 8(a) is the image acquired with white light illumination from a halogen lamp on the left side; half of the face is in shadow. Figure 8(b) is the image acquired with additional active NIR illumination at 850 nm. Using active NIR illumination and the RGB-NIR camera, one can capture images or videos of acceptable quality under various illumination conditions, which has great potential in surveillance systems.
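The intensity-replacement step used for portrait enhancement in Section 4.1 can be sketched as follows. This is a minimal illustration: the blend weight `alpha` and the channel-mean definition of intensity are assumptions made here, not values from the paper or from [25].

```python
import numpy as np

def enhance_with_nir(rgb, nir, alpha=0.7):
    """Rebuild the intensity channel of an HSI-style decomposition
    from the NIR image, in the spirit of Section 4.1.

    rgb: (H, W, 3) in [0, 1]; nir: (H, W) in [0, 1];
    alpha: illustrative blend weight between NIR and the original I.
    """
    intensity = rgb.mean(axis=2)                    # I of HSI
    new_i = alpha * nir + (1 - alpha) * intensity   # blended intensity
    # Rescale RGB so its per-pixel mean equals the new intensity,
    # leaving hue and saturation (the channel ratios) untouched.
    scale = new_i / np.maximum(intensity, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

Because only the per-pixel scale changes, color ratios (hue) survive while fine surface texture, which lives mostly in the visible intensity, is smoothed away by the NIR channel.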


Fig. 8. Effect of using NIR to remove the shadow. (a) White light image, (b) NIR and white light image.

4.3 Vein enhancement

Vein detection systems use imaging techniques to capture vein images, enhance the image contrast, and then display the result on a screen or project it back onto the skin [26, 27]. They help clinicians find the optimal venipuncture site and avoid potential stick complications. Compared to other vein enhancement methods, our approach with the RGB-NIR camera and active 850 nm NIR illumination has the advantage that it captures perfectly aligned white light and NIR images simultaneously, so the vein information can be enhanced with the complementary information from both. As an example, the RGB and NIR images extracted from one raw image are shown in Figs. 9(a) and 9(b), respectively. Veins are visible in both images, and the visualization is enhanced when the two images are fused. The RGB image in Fig. 9(c) is another example, in which the vein is difficult to identify from the RGB image alone. With the input from the NIR channel, the vein is clearly visible in the fused RGB and NIR image shown in Fig. 9(d). According to a gray value analysis, the vein contrast in Fig. 9(d) is almost double that in Fig. 9(c).
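A simple RGB+NIR fusion and the accompanying contrast measurement can be sketched as follows. The paper does not specify its fusion rule or contrast metric, so the equal-weight blend and the Weber contrast below are illustrative assumptions.

```python
import numpy as np

def fuse_rgb_nir(rgb, nir, w=0.5):
    """Illustrative fusion: blend the NIR image (where veins appear
    dark) into every visible channel with weight w. This is an
    assumed rule, not the paper's exact method.
    """
    return np.clip((1.0 - w) * rgb + w * nir[..., None], 0.0, 1.0)

def weber_contrast(region, background):
    """Weber contrast of a vein region against its background."""
    return abs(region.mean() - background.mean()) / background.mean()

# Synthetic case: the vein is invisible in RGB but dark in NIR.
rgb = np.full((4, 4, 3), 0.8)      # flat skin, vein not visible
nir = np.full((4, 4), 0.8)
nir[1:3, 1:3] = 0.4                # vein region is dark in NIR
fused = fuse_rgb_nir(rgb, nir)
```

In this toy example the vein contrast rises from zero in the RGB image to a clearly measurable value after fusion, mirroring the roughly doubled contrast reported for Fig. 9(d).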

Fig. 9. Vein image using RGB-NIR camera. (a) RGB image, (b) NIR image, (c) RGB image, (d) Enhanced vein image


5. Conclusion

In this paper, a multispectral imaging system based on a new RGB-NIR camera is introduced. A complete color correction pipeline has been developed to extract accurate color information. We have also demonstrated its applications in portrait enhancement, shadow removal, and vein enhancement. The camera also has great potential in other applications that require both visible and NIR information, such as fruit and vegetable sorting, food inspection, web inspection, and tile inspection. In future work, we will further improve the color correction algorithms; for instance, a look-up table will be established to obtain more accurate color information.

Acknowledgments

This work is supported by a grant from the China Scholarship Council. We also thank Rui Wang for his assistance in the experiments.
