Digitized holography: modern holography for 3D imaging of virtual and real objects

Kyoji Matsushima,1,* Yasuaki Arima,1 and Sumio Nakahara2

1 Department of Electrical and Electronic Engineering, Kansai University, 3-3-35 Yamate-cho, Suita, Osaka 564-8680, Japan

2 Department of Mechanical Engineering, Kansai University, 3-3-35 Yamate-cho, Suita, Osaka 564-8680, Japan

*Corresponding author: matsu@kansai-u.ac.jp

Received 5 August 2011; accepted 24 October 2011; posted 4 November 2011 (Doc. ID 152016); published 5 December 2011

Recent developments in computer algorithms, image sensors, and microfabrication technologies make it possible to digitize the whole process of classical holography. This technique, referred to as digitized holography, allows us to create fine spatial three-dimensional (3D) images composed of virtual and real objects. In this technique, the wave field of real objects is captured over a wide area and at very high resolution using synthetic aperture digital holography. The captured field is incorporated into virtual 3D scenes that include two-dimensional digital images and 3D polygon-mesh objects. The synthetic field is optically reconstructed using the technique of computer-generated holograms. The reconstructed 3D images present all depth cues, as classical holograms do, but unlike classical holograms they are digitally editable, archivable, and transmittable. The synthetic hologram, printed by a laser lithography system, has a wide full-parallax viewing zone and gives viewers a strong sensation of depth that has never been achieved by conventional 3D systems. An actual hologram, as well as the details of the technique, is presented to verify the proposed method. © 2011 Optical Society of America

OCIS codes: 090.1760, 090.1995, 090.2870, 110.6880.

1. Introduction

In classical holography, the wave field emitted by a real object is recorded on a light-sensitive film in the form of a fringe pattern generated by optical interference with a reference wave. After chemical processing of the film, the wave field of the object is optically reconstructed by diffraction from the fringe pattern; this reconstructed field is the three-dimensional (3D) spatial image produced by classical holography. Therefore, a real object is needed to create a 3D image in classical holography. Classical holography makes it possible to reconstruct brilliant 3D images that provide almost all depth cues, because holograms reconstruct the light of the recorded 3D scene itself. However, these holograms cannot be stored digitally or transmitted through digital networks. It is also almost impossible to edit the 3D scene after recording the interference


fringe. These features are useful for some specific purposes, such as security, but are inconvenient for 3D imaging. Two types of techniques are continuously being developed to advance classical holography. One technique, commonly referred to as digital holography (DH) [1], captures the interference fringe pattern using digital image sensors. In DH, images are numerically reconstructed by digital processing of the captured fields. However, these are not 3D images but two-dimensional (2D) digital images displayed on a screen or printed on a piece of paper. Since the captured data contain the phase information of the light, this technique is mainly used for microscopy and some fields of metrology such as flow measurement. The other technique is the computer-generated hologram (CGH) [2]. This technique numerically generates a fringe pattern using a digital computer and reconstructs wave fields of light by diffraction from the fringe pattern

printed or displayed. In principle, this technique is capable of producing any light if one knows what light should be produced. In the case of CGHs, a real object is no longer required, but all depth cues are reconstructed, as in classical holograms. This is an ideal feature for modern 3D technology. Thus, 3D imaging with CGHs is sometimes referred to as the final 3D technology. However, for a long time it was not possible to create fine CGHs for virtual 3D scenes such as those in modern computer graphics (CG). Instead of being used for 3D imaging, CGHs were treated as optical components such as optical filters. This was mainly due to the gigantic display resolution necessary to create 3D images using CGHs. Once the wave field is provided in the form of numerical data, CGHs can reconstruct the wave field. However, it is extremely difficult, even using modern computers, to compute the high-definition wave field emitted from virtual 3D scenes. Printing or displaying 3D images using CGHs is also very difficult because of the extremely high definition necessary for reconstructing fine 3D images.

However, the recent development of polygon-based computer algorithms [3] allows us to calculate the high-definition wave field of a completely virtual 3D scene whose shape and properties are given by a numerical model. We have reported fully synthetic full-parallax computer holograms [4-9] that were printed with commercially available laser lithography equipment developed for fabricating photomasks [4]. These synthetic holograms are composed of more than a billion pixels and reconstruct brilliant 3D images of occluded virtual 3D scenes. The reconstructed 3D images are not motion pictures but stills, at least for now. However, the quality of the 3D images is comparable to that of classical holography. The reconstructed spatial 3D images are quite different from those produced by currently available 3D systems; they give viewers a strong sensation of depth that has never been provided by conventional 3D systems, which offer only binocular disparity. In this paper, we refer to the technique for creating CGHs as computer holography and refer to the created hologram as a computer hologram.

Fig. 1. (Color online) Schematic illustration of computer holography: field capturing, digital editing, and field synthesis.

Figure 1 schematically shows the concept of computer holography. Conventional CGHs, or the computer holograms reported so far, mainly reconstruct virtual 3D scenes or objects. To reconstruct real objects through computer holography, three approaches can be adopted at this time. The easiest approach is to measure the shape of 3D objects using a laser rangefinder or 3D scanner and to texture-map a photograph of the object onto the resulting polygon mesh [9,10]. However, the 3D image obtained with this approach may be regarded as a type of synthetic image rather than a real image. The second approach is to use a technique based on multiple-viewpoint projection [11]. This may be the most promising approach, but, as far as we know, no high-quality hologram comparable to classical holograms has been reported for this technique. The third approach is to capture real wave fields using DH. This has been attempted in order to reconstruct real objects through electro-holography [12,13]. However, electro-holography currently cannot reconstruct a fine 3D image in the first place. The reconstructed images are comparable to neither classical holograms nor high-definition computer holograms.

It is theoretically possible to capture any object field using DH, but it is not easy in practice to capture real fields that meet the following two requirements for creating the high-definition computer holograms mentioned above. The first requirement is that the sampling interval of the captured field be sufficiently small to provide a large viewing zone in the optical reconstruction; the interval should not exceed one micron. The second requirement is a large capturing area, comparable in size to the created computer hologram. This also leads to a larger viewing zone and to the ability to reconstruct objects as large as the hologram itself, whose side length is, for example, approximately 10 cm. Both requirements mean that the field must be captured on a very large number of pixels. Unfortunately, currently available image sensors do not meet these requirements.

To resolve the problem, we use lensless-Fourier synthetic aperture digital holography (LFSA-DH) [14,15], a type of DH that uses a spherical reference wave. LFSA-DH resolves the sensor-related problems of capturing real fields and makes it possible to reconstruct 3D images of a real object through computer holography. This means that the whole process of classical holography is replaced by digital counterparts. Thus, we refer to this technique as digitized holography, as shown in Fig. 1. Digitized holography allows us to digitally edit, archive, transmit, and optically reconstruct the wave field of real 3D objects. In addition, the real wave field can be mixed with virtual 3D scenes composed of digital 2D images and polygon-mesh 3D objects. The details of the technique are presented in this paper, and an actual high-definition computer hologram is demonstrated to verify the reconstruction of a mixed 3D scene including real and virtual objects.


2. Capturing Large-Scale Wave Fields

Capturing the wave fields of real objects using an image sensor is simply the digital counterpart of recording in classical holography. However, impressive synthetic holograms commonly need a display resolution of at least a billion pixels and a physical resolution of less than one micron [4]. The wave fields captured by conventional DH do not meet these requirements, because even state-of-the-art sensors have no more than tens of millions of pixels, and their resolution does not reach one micron. To resolve the problem, we use LFSA-DH [15].

2.A. Principle for Reducing Sampling Intervals

In LFSA-DH, the wave field of an object is obtained by Fourier transformation of the field captured by the image sensor using a phase-shifting technique [16], as shown in Fig. 2. Here, the sampling intervals of the Fourier-transformed field in the image plane are [15]

$$\Delta x = \frac{\lambda d_R}{N_x \delta_x}, \qquad \Delta y = \frac{\lambda d_R}{N_y \delta_y}, \tag{1}$$

where λ and dR are the wavelength and the distance between the center of the spherical reference wave and the image sensor, respectively. The numbers of sensor pixels and the sensor pitches are Nx × Ny and δx × δy, respectively. Note that, according to Eq. (1), the sampling intervals of the Fourier-transformed field are not the same as those of the image sensor; they depend directly on the distance dR and the sampling numbers Nx and Ny. This means that the sampling intervals can be controlled through these parameters so as to fit high-definition computer holography.
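As a quick numerical check, the following sketch (ours, not from the paper) evaluates Eq. (1) with the experimental values reported in Table 1 and confirms that the chosen dR yields a 1.0 μm sampling interval; it also evaluates the grating-equation viewing angle that such a pitch permits.

```python
import numpy as np

# Sketch (not from the paper): evaluate Eq. (1) with the Table 1 values.
wavelength = 532e-9   # capture wavelength lambda [m]
d_R = 0.215           # distance of the reference point source [m]
delta = 3.5e-6        # sensor pixel pitch, delta_x = delta_y [m]
N = 32768             # samplings per axis of the synthesized aperture

dx = wavelength * d_R / (N * delta)              # Eq. (1)
print(f"sampling interval: {dx * 1e6:.2f} um")   # -> ~1.00 um

# A ~1 um pitch limits the viewing angle through the grating equation.
# At the 632.8 nm reconstruction wavelength of Section 4.B this gives ~37 deg.
theta = 2 * np.degrees(np.arcsin(632.8e-9 / (2 * dx)))
print(f"full viewing angle: {theta:.1f} deg")
```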

2.B. Principle for Increasing the Sampling Cross Section

Since the sampling intervals decrease as the numbers of sensor pixels Nx and Ny increase, the synthetic aperture technique is used to increase the effective number of sensor pixels [14,15]. Here, the lensless-Fourier setup using a spherical reference wave has the advantage that the spatial frequency of the fringe does not increase at the edge of the sensor plane, unlike the case for a plane reference wave. In synthetic aperture DH, the image sensor is mechanically translated and captures the wave field


at different positions. Here, the sensor shift is set to be smaller than the sensor area so that the fields captured at neighboring positions overlap, as shown in Fig. 3. The overlap area is used to eliminate translation errors; i.e., the sensor positions are measured exactly using a correlation function of the captured fields [14]. As a result, all of the captured fields can be integrated into a single large-scale wave field.
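The correlation-based position measurement can be sketched as follows. This is our own minimal illustration, not the authors' code: it estimates only the integer-pixel offset between the nominal overlap regions of two neighboring tiles by FFT-based cross-correlation; subpixel refinement and the reconciliation of phase offsets between tiles, which a real stitching pipeline would also need, are omitted.

```python
import numpy as np

def overlap_offset(tile_a, tile_b, overlap_px):
    """Integer-pixel shift of tile_b relative to its nominal position,
    estimated from the amplitude in the nominal overlap with tile_a.
    tile_a, tile_b: complex fields; tile_b lies nominally to the right."""
    a = np.abs(tile_a[:, -overlap_px:])  # right edge of the left tile
    b = np.abs(tile_b[:, :overlap_px])   # left edge of the right tile
    a = a - a.mean()
    b = b - b.mean()
    # circular cross-correlation via FFT; the peak position gives the shift
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    py, px = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # map wrapped peak coordinates to signed shifts
    dy = py - a.shape[0] if py > a.shape[0] // 2 else py
    dx = px - a.shape[1] if px > a.shape[1] // 2 else px
    return dy, dx  # correction to apply when pasting tile_b into the mosaic
```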

2.C. Experiment for Capturing the Large-Scale Wave Field through LFSA-DH

The experimental setup for capturing large-scale wave fields through LFSA-DH is shown in Fig. 4. The image sensor with 3000 × 2200 pixels (Lumenera Lw625) is mechanically translated by a computer-controlled motor stage. The fringe pattern is captured three times at each position to obtain a complex wave field [17], using the phase shift provided by the mirror M3 mounted on a piezo phase-shifter. Amplitude images of the captured and Fourier-transformed fields are shown in Figs. 5(a) and 5(b), respectively. The total field is obtained by stitching the individual fields captured at 8 × 12 positions. The total cross section of the captured field is 77 × 80 mm². The parameters used for capturing are summarized in Table 1. Here, dR is the distance between SF3, which generates the spherical reference wave, and the sensor plane, as shown in Fig. 4. The distance is set to 21.5 cm in this experiment. This is a free parameter and is therefore determined using Eq. (1) so that the sampling intervals of the field are exactly 1.0 μm × 1.0 μm after Fourier transformation.
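The three captures per position are combined into a complex field by phase-shifting interferometry. The sketch below assumes a standard three-step variant with reference phase shifts of 0, π/2, and π; this is our assumption, as the paper cites a three-frame method [17] without restating its particular shifts.

```python
import numpy as np

def field_from_three_frames(I0, I1, I2):
    """Three-step phase-shifting demodulation (assumed shifts 0, pi/2, pi).
    With I_k = A + B*cos(phi + k*pi/2), where B = 2|O||R| and phi is the
    object-reference phase difference:
        B*cos(phi) = (I0 - I2)/2,   B*sin(phi) = (I0 + I2)/2 - I1,
    so the complex field O*conj(R) at the sensor is recovered up to the
    (approximately uniform) reference amplitude |R|."""
    re = (I0 - I2) / 4.0
    im = (I0 + I2 - 2.0 * I1) / 4.0
    return re + 1j * im
```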

3. Editing a 3D Scene in Digitized Holography

Captured large-scale fields are incorporated into a 3D scene that includes virtual objects such as polygon-mesh 3D objects and digital 2D images. The elements comprising the 3D scene are referred to as components in this section.

3.A. Configuration of a 3D Scene

Fig. 2. (Color online) Coordinate system and geometry of lensless-Fourier digital holography.

Fig. 3. (Color online) Capture of large-scale wave fields using the synthetic aperture technique.

Fig. 4. (Color online) Experimental setup for capturing a large wave field by synthetic aperture DH. M: mirror, BS: beam splitter, RP: retarder plate, SF: spatial filter.

Fig. 5. Amplitude images of the captured (a) and Fourier-transformed (b) fields.

Fig. 6. (Color online) The coordinate system and geometry used to design the 3D scene and compute the whole wave field of the scene.

The coordinate system used to design 3D scenes is shown in Fig. 6. The center of the hologram is positioned at the origin of the global coordinates (X, Y, Z). All of the real and virtual objects composing the 3D scene are given by their wave fields, i.e., distributions of complex amplitudes sampled in a plane parallel to the hologram. This is true even in the case of 3D objects; their object fields are computed from the CG model and then incorporated into the 3D scene in a given plane. The wave field of component n has its own local coordinates (xn, yn). The origin of the local coordinates, denoted (Xn, Yn, Zn) in the global coordinates, defines the position of the component in the 3D scene and is determined by the designer of the scene.

Computation of the whole wave field of the 3D scene begins with the component farthest from the hologram, whose field is u(X, Y; Z0), and ends in the hologram plane; the whole field of the scene is given by u(X, Y; ZN), where Z0 and ZN are the Z-positions of the farthest component and the hologram, respectively, and N is the total number of components. Note that ZN ≡ 0 by the definition of the global coordinates, and the position of the nearest component is given by ZN−1. This sequential calculation is necessary when employing the silhouette method [18,19,4] to shield the light behind each object and prevent individual objects from appearing as see-through images.
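As an illustration of the coordinate bookkeeping, the hypothetical helper below (ours, not from the paper) embeds a component's locally sampled field into the global (X, Y) grid, realizing the shift on(X − Xn, Y − Yn) that appears in the recurrence formula of Section 3.B; placement is rounded to whole pixels.

```python
import numpy as np

def embed(local_field, center_m, grid_shape, pitch):
    """Paste a component field sampled on its local (x_n, y_n) grid into the
    global grid centered on the hologram origin (hypothetical helper).
    center_m: (X_n, Y_n) in meters; pitch: sampling interval in meters."""
    out = np.zeros(grid_shape, dtype=complex)
    ny, nx = local_field.shape
    y0 = grid_shape[0] // 2 + int(round(center_m[1] / pitch)) - ny // 2
    x0 = grid_shape[1] // 2 + int(round(center_m[0] / pitch)) - nx // 2
    out[y0:y0 + ny, x0:x0 + nx] = local_field  # no bounds handling in this sketch
    return out
```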

Table 1. Parameters Used to Capture the Large-Scale Wave Field

Number of sensor pixels: 3,000 × 2,000
Sensor pitches: 3.5 μm × 3.5 μm
Number of captures: 8 × 12
Wavelength: 532 nm
Distance of reference point source (dR): 21.5 cm
Total number of samplings of Fourier-transformed field: 32,768 × 32,768
Sampling intervals of Fourier-transformed field: 1.0 μm × 1.0 μm

3.B. Principle of Light Shielding Employing the Silhouette Method

The light behind a real object must be shielded to correctly reconstruct the occluded scene. The silhouette method, proposed for light shielding in fully synthetic holography, is applied to the real object. The principle of light shielding for captured fields is shown in Fig. 7. Since the incident field behind the captured object should be shielded over the cross section of the object, the incident field is multiplied by a binary mask Mn(xn, yn) that corresponds to the silhouette of the object. The captured field on(xn, yn) is then added to the masked background field. This process is exactly the same as that for the synthetic fields of virtual 2D and 3D objects [18,19]. This sequential light shielding is written as a recurrence formula:

$$u(X, Y; Z_{n+1}) = \mathcal{P}_{Z_{n+1}-Z_n}\left\{ u(X, Y; Z_n)\, M_n(X - X_n, Y - Y_n) + o_n(X - X_n, Y - Y_n) \right\}, \tag{2}$$

where the symbol $\mathcal{P}_d\{\cdot\}$ represents field propagation over the distance d. Note that the angular spectrum method [20] or the band-limited angular spectrum method [21] is used for the numerical propagation if the memory installed in the computer is sufficient to store the whole field; otherwise, segmented propagation [4] employing the off-axis propagation method [22,5] is required.

Fig. 7. (Color online) The principle of the silhouette method for real captured fields. The background field is propagated to the position Zn, where the captured object is arranged as the designer intended.

Fig. 8. Extraction of a silhouette mask from the captured field. The amplitude image (b) is obtained from a small part of the captured field (a). The silhouette mask (c) is obtained from the amplitude image (b).
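To make the recurrence concrete, here is a minimal sketch of Eq. (2) with a plain angular spectrum propagator [20]. It is an illustration under simplifying assumptions, not the authors' implementation: every field and mask is assumed to be already embedded on a common global grid (so the lateral shifts are absorbed), the components are ordered farthest-first, and neither the band-limited variant [21] nor segmented off-axis propagation is included.

```python
import numpy as np

def propagate(u, d, wavelength, pitch):
    """Plain angular spectrum propagation of field u over distance d [20].
    Adequate for a sketch; large holograms need the band-limited or
    segmented off-axis methods cited in the text."""
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * d * np.sqrt(arg))  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(u) * H)

def scene_field(components, wavelength, pitch):
    """Sequential silhouette shielding of Eq. (2).
    components: farthest-first list of (o_n, M_n, Z_n) already on the
    common global grid, with Z_n < 0 and the hologram plane at Z = 0."""
    o0, _, z = components[0]
    u = np.zeros_like(o0, dtype=complex)
    for o_n, M_n, z_n in components:
        u = propagate(u, z_n - z, wavelength, pitch)  # background to plane Z_n
        u = u * M_n + o_n                             # shield, then add the object
        z = z_n
    return propagate(u, -z, wavelength, pitch)        # final hop to Z_N = 0
```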

3.C. Extraction of a Silhouette Mask from the Captured Wave Field

It is expected that the silhouette mask Mn(xn, yn) of a real object can be extracted from the captured field on(xn, yn), because the field retains the shape information of the object. However, in the amplitude image obtained from the captured field, the edge of the object is blurred by heavy defocusing, as shown in Fig. 5(b). This phenomenon is similar to blurring in macro photography, where the depth of field is commonly small. An aperture would therefore have to be used during capture so that the numerically reconstructed amplitude image is clear. In digital holography, however, the same effect is achieved simply by clipping a small part of the captured field after capturing. Figure 8(b) shows the amplitude image obtained by Fourier transformation of a small part of the captured field in Fig. 8(a). It is verified that the blurring disappears and the image is clear. The silhouette mask in Fig. 8(c) is obtained by binarizing and inverting the amplitude image in Fig. 8(b).
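The clipping-based extraction can be sketched as follows. This is our illustration; the paper does not give the window size or threshold, so both are placeholders.

```python
import numpy as np

def silhouette_mask(captured, window, rel_threshold=0.1):
    """Extract a silhouette mask as in Section 3.C: clip a small window of
    the captured field (a synthetic aperture stop that deepens the depth
    of field), Fourier transform, binarize the amplitude, and invert.
    window: (y0, y1, x0, x1) slice bounds; rel_threshold is a free choice.
    Resampling the mask to the full object-field grid is omitted here."""
    y0, y1, x0, x1 = window
    amp = np.abs(np.fft.fftshift(np.fft.fft2(captured[y0:y1, x0:x1])))
    return (amp < rel_threshold * amp.max()).astype(float)  # 0 on object, 1 elsewhere
```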

Table 2. Summary of Parameters Used to Create Bear II

Total number of pixels: 4.3 × 10⁹ (65,536 × 65,536)
Pixel pitches: 1 μm × 1 μm
Dimension of hologram: 65.5 × 65.5 mm²
Number of segments: 2 × 2
Hologram coding: Binary amplitude
Reconstruction wavelength: 632.8 nm
Dimension of wallpaper (W × H): 65.5 × 65.5 mm²
Center position of far bear (X1, Y1, Z1): (15, −5, −200) mm
Center position of near bear (X2, Y2, Z2): (0, −5, −150) mm
Depth of bees' plane (Z3): −100 mm

4. Computer Hologram of a Mixed 3D Scene Including Virtual and Real Objects

A computer hologram named “Bear II” is created using the captured field presented in Section 2.C.

4.A. Mixed 3D Scene

A real object, a toy bear whose wave field is captured through LFSA-DH, is mixed with a virtual 3D scene. The design of the scene is shown in Fig. 9. Here, the bear appears twice in the scene; i.e., the same captured wave field is used twice. Virtual objects such as 2D wallpaper and 3D bees are arranged behind or in front of the two bears. Occlusion is correctly reconstructed between the bears, as well as between the objects behind and in front of them, as if real objects were placed at those positions. This kind of editing of 3D scenes is impossible in classical holography; only digitized holography allows us to edit the 3D scene.

4.B. Fabrication and Reconstruction of “Bear II”

After calculation of the whole wave field of the mixed 3D scene, the fringe pattern is generated by numerical interference with a reference wave and then quantized to produce a binary pattern. Finally, the binary amplitude hologram is fabricated using a laser lithography system.
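The coding step can be illustrated with a short sketch (ours; the actual reference-wave geometry and quantization rule used for Bear II are not specified in the text, so the tilt angle and threshold below are placeholders).

```python
import numpy as np

def binary_amplitude_hologram(u, wavelength, pitch, ref_angle_deg=1.0):
    """Numerical interference of the scene field u with a tilted plane
    reference wave, then quantization to a binary amplitude pattern."""
    ny, nx = u.shape
    y = (np.arange(ny) - ny / 2) * pitch
    x = (np.arange(nx) - nx / 2) * pitch
    X, Y = np.meshgrid(x, y)
    k = 2.0 * np.pi / wavelength
    ref = np.exp(1j * k * np.sin(np.radians(ref_angle_deg)) * Y)  # tilted reference
    fringe = np.abs(u + ref) ** 2                     # interference pattern
    return (fringe > np.median(fringe)).astype(np.uint8)  # binary quantization
```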

Fig. 9. (Color online) Mixed 3D scene of “Bear II” (units: mm). The scene includes the fields of the real object and the CG-modeled virtual objects.


There are approximately four billion pixels in Bear II. Since the pixel pitches are 1.0 μm × 1.0 μm, the viewing angle is 37° in both the horizontal and vertical directions. The parameters used to create Bear II are summarized in Table 2. Photographs and videos of the optical reconstruction of Bear II are shown in Figs. 10 and 11. It is verified that the occlusion of the 3D scene is accurately reconstructed, with the appearance of the 3D scene varying as the point of view changes.

Fig. 10. (Color online) Photograph of the optical reconstruction of Bear II using reflected illumination of an ordinary red LED (Media 1).

Fig. 11. (Color online) Photographs of the optical reconstruction of Bear II using transmitted illumination of a He-Ne laser (Media 2). Photographs (a)-(c) are taken from different viewpoints.

5. Discussion

Occluded scenes are reconstructed by a silhouette-masking technique that shields the field behind the object. However, silhouette masking is not a universally applicable technique for light shielding. For example, black shadows that are not seen from the in-line viewpoint appear around the object when it is viewed from an off-axis viewpoint, as shown in Fig. 12. This is most likely due to disagreement between the plane in which the real wave field is given and the plane in which the object has its maximum cross section. As shown in Fig. 13, viewers see the silhouette mask itself in this case; the background light cannot be seen, even though it is not hidden by the object. In this case, however, we can easily resolve the problem by numerically propagating the field a short distance so that the field plane is placed exactly at the maximum cross section of the object. Unfortunately, silhouette masking does not work well in some cases where the object has severe self-occlusion or where the silhouette shape of the object does not fit the cross section.

Fig. 12. (Color online) Occlusion errors occurring in the case of off-axis viewpoints.

Fig. 13. (Color online) Origin of the occlusion error in the cases where the field plane is not placed at the maximum cross section of the object.

6. Conclusion

We proposed a technique called digitized holography. Using this technique, the wave field of a real object is captured on a personal computer using lensless-Fourier synthetic aperture digital holography. The captured field is incorporated into a virtual 3D scene and optically reconstructed by computer holography. This means that the whole process of classical holography is replaced with modern digital processing of the wave field. As a result, the 3D images reconstructed by holography can be edited, stored, and transmitted by digital technology, unlike those of classical holography. The reconstructed 3D image is a spatial image like its counterpart in classical holography and thus conveys a strong depth impression to viewers.

The authors thank Mr. Nishi for his assistance in designing the 3D scene of Bear II. This work was supported by JSPS KAKENHI (21500114) and the Kansai University Research Grants: Grant-in-Aid for Encouragement of Scientists, 2011-2012.

References

1. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77-79 (1967).
2. A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6, 1739-1748 (1967).
3. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607-4614 (2005).


4. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54-H63 (2009).
5. K. Matsushima and S. Nakahara, “High-definition full-parallax CGHs created by using the polygon-based method and the shifted angular spectrum method,” Proc. SPIE 7619, 761913 (2010).
6. K. Matsushima, M. Nakamura, and S. Nakahara, “Novel techniques introduced into polygon-based high-definition CGHs,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2010), paper JMA10.
7. K. Matsushima, M. Nakamura, I. Kanaya, and S. Nakahara, “Computational holography: real 3D by fast wave-field rendering in ultra-high resolution,” in Proceedings of SIGGRAPH Posters 2010 (2010).
8. K. Matsushima, “Wave-field rendering in computational holography,” in 2010 IEEE/ACIS 9th International Conference on Computer and Information Science (2010), pp. 846-851.
9. H. Nishi, K. Higashi, Y. Arima, K. Matsushima, and S. Nakahara, “New techniques for wave-field rendering of polygon-based high-definition CGHs,” Proc. SPIE 7957, 79571A (2011).
10. K. Matsushima, H. Nishi, and S. Nakahara are preparing a manuscript to be called “Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography.”
11. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48, H120-H136 (2009).



12. N. Hashimoto, K. Hoshino, and S. Morokawa, “Improved real-time holography system with LCDs,” Proc. SPIE 1667, 2-7 (1992).
13. K. Sato, “Record and display of color 3-D images by electronic holography,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2007), paper DWA2.
14. R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41, 4775-4782 (2002).
15. T. Nakatsuji and K. Matsushima, “Free-viewpoint images captured using phase-shifting synthetic aperture digital holography,” Appl. Opt. 47, D136-D143 (2008).
16. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268-1270 (1997).
17. Y. Takaki, H. Kawai, and H. Ohzu, “Hybrid holographic microscopy free of conjugate and zero-order images,” Appl. Opt. 38, 4990-4996 (1999).
18. K. Matsushima and A. Kondoh, “A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90-97 (2004).
19. A. Kondoh and K. Matsushima, “Hidden surface removal in full-parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38, 53-61 (2007).
20. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), Chap. 3.10.
21. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662-19673 (2009).
22. K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18, 18453-18463 (2010).