Three-dimensional camera phone

Keigo Iizuka

An inexpensive technique for realizing a three-dimensional (3D) camera phone display is presented. Light from the liquid-crystal screen of a camera phone is linearly polarized, and its direction of polarization is easily manipulated by a cellophane sheet used as a half-waveplate. The novel 3D camera phone display is made possible solely by optical components without resorting to computation, so that the 3D image is displayed in real time. Quality of the original image is not sacrificed in the process of converting it into a 3D image. © 2004 Optical Society of America
OCIS codes: 100.3010, 100.6890, 110.6880, 120.2040, 330.1400.

1. Introduction

Humans have long been fascinated with finding ways to produce three-dimensional (3D) displays out of two-dimensional (2D) pictures. Over the past 150 years, various schemes for 3D displays have been devised.1 The earliest schemes were based on stereoscopes, such as combining images by mirrors2,3 or by prisms.1 In a more recent virtual-reality approach, images directed to the eyes via a head-mounted display change with the observer's head movement.4 Some schemes require the viewer to wear glasses made of colored filters,1,5 polarizers,1,5 or liquid-crystal shutters.6,7 Other schemes designed to eliminate the need for viewing aids include parallax barriers,1,5 lenticular lenses,8–12 integral photography,13–15 holography,16–18 and varifocal mirrors.19,20

Interest in camera phones is ever growing, and 75 million camera phones were shipped worldwide in 2003. With this momentum, the electronics manufacturer Sharp recently began fabricating a 3D camera phone. The fabrication involves laminating an optical parallax-barrier layer over the ordinary liquid-crystal display layer. With this display, the operator uses the camera phone to take two pictures in succession, shifting the camera horizontally by approximately 6.5 cm between the exposures.

K. Iizuka ([email protected]; http://www.keigo-iizuka.com) is with the Department of Electrical and Computer Engineering, 10 King's College Road, University of Toronto, Toronto, Ontario M5S 3G4, Canada.
Received 18 February 2004; revised manuscript received 14 June 2004; accepted 16 July 2004.
0003-6935/04/346285-08$15.00/0
© 2004 Optical Society of America

A digital signal processor converts these two images into a format suitable for a parallax-barrier representation. Besides the barrier noise, the resolution is reduced to at best one half of that of the 2D display.

The operating principle of our 3D camera phone is not based on digital signal processing. It operates solely by manipulation of optics, which has many advantages. Fabrication is simpler, so the cost of conversion to 3D is extremely low. Real-time operation is feasible because there is no delay between taking the pictures and creating the 3D image output. Image quality does not deteriorate in the process of making the image 3D. There are no cross-talk images, no parallax-barrier noise, and no reduction in resolution. The observer need not position the display at a specific distance to see the 3D image. This technique takes advantage of the fact that the light from the liquid-crystal screen of a camera phone is linearly polarized and that its direction of polarization is easily manipulated by a cellophane half-waveplate.21

2. Principle of Operation

Let us begin by reviewing some basic stereoscopic principles. Figure 1(a) shows what the left and right eyes see when a ball is flying toward the observer. The left eye sees the ball to its right, while the right eye sees the same object to its left. This occurrence is called parallax. The parallax assists our brain in judging the distance to the ball. Figure 1(b) is an attempt to fool the brain. Two pictures of the ball are drawn on the screen located behind the ball by extension of the lines of sight onto the screen. As far as the light paths between the center-crossing point and the eyes are concerned, they are the same as the light paths that would have been created by the actual ball.


Fig. 1. Principle of stereoscopy. (a) The observer sees the ball in front of his or her eyes. (b) A picture of the ball is drawn on the screen by extension of the lines from the eyes to the ball. (c) The observer sees two balls on the screen, and there is no stereoscopic effect as yet. To produce a stereoscopic effect, the views represented by the dashed lines must be eliminated, which we accomplish using the gating behavior of polarizers.

However, to create the illusion that the ball exists off the screen, we must find a way to ensure that each eye sees only one picture of the ball, as indicated by the solid-line traces in Fig. 1(c). In other words, if we eliminate the views that correspond to the dashed-line traces in Fig. 1(c), the light paths in Fig. 1(c) become identical to the light paths shown in Fig. 1(b), and the illusion that the ball exists outside the screen is created. One way to accomplish this is by use of polarized light. Manipulation of polarized light makes it possible for each eye to see only the correct view, which is the view corresponding to the solid-line traces. Polarized light is used not only because our eyes are insensitive to the state of polarization of the light but also because its path can be selectively blocked or transmitted by a polarizer. The light emanating from the liquid-crystal screen of a camera phone is already linearly polarized, simply because the top surface of the liquid-crystal screen is covered by a polarizer sheet as one of the necessary parts of the liquid-crystal display. To examine the effectiveness of the gating function of a polarizer sheet, we hold a triangular polarizer sheet up to the phone's liquid-crystal display (Fig. 2). In Fig. 2(a), the polarizer sheet is oriented with its transmission axis perpendicular to the polarization direction of the light from the screen. With this orientation, the polarizer sheet is quite effective at blocking the light from the screen (but not quite 100%). In Fig. 2(b), the transmission axis of the polarizer sheet is rotated by 90° so that it becomes parallel to the polarization direction of the light from the screen. With this orientation, the polarizer sheet becomes transparent (but not quite 100%). Thus, the light path is easily gated (albeit not perfectly) by means of the polarizer orientation. With the degree of attenuation and transmission shown in Fig. 2, no shortcomings due to the imperfect gating are noticeable in the final 3D image.
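The gating follows Malus's law: the intensity transmitted by an ideal linear polarizer varies as the square of the cosine of the angle between the light's polarization and the transmission axis. The short Jones-calculus sketch below is an illustrative check only (it is not part of the paper and assumes ideal, lossless polarizers and display light polarized at 135°, the orientation noted later for the Panasonic GD 88).

```python
import numpy as np

def linear_pol(theta_deg):
    """Jones vector of unit-amplitude light linearly polarized at theta_deg."""
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t)])

def polarizer(theta_deg):
    """Jones matrix of an ideal linear polarizer with its transmission axis at theta_deg."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

screen = linear_pol(135)              # display light polarized at 135 deg (assumed)
crossed = polarizer(45) @ screen      # transmission axis perpendicular to the light
parallel = polarizer(135) @ screen    # transmission axis parallel to the light

print(np.sum(np.abs(crossed) ** 2))   # ~0: blocked, as in Fig. 2(a)
print(np.sum(np.abs(parallel) ** 2))  # ~1: transmitted, as in Fig. 2(b)
```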


There are two ways to obtain 3D images from the phone's liquid-crystal display. The first is to use a pair of camera phones, and the second is to use only one camera phone. The method of using a pair of camera phones is conceptually simpler, and this method is explained first.

3. Three-Dimensional Display with Two Camera Phones

The two camera phones (we used two Panasonic GD 88 mobile cellular phones for this demonstration) are placed with a spacing of approximately 6.5 cm, which is the average spacing between human eyes. A pair of pictures taken with this spacing is a pair of stereoscopic images. Figure 3(a) shows the geometry for taking the stereoscopic image of a pencil, and Fig. 3(b) is a photograph showing the resulting stereoscopic pair of images. This stereoscopic pair is sent by the transmitter phones to a pair of dialed distant receiver phones so that the two transmitted images are reproduced at the receiver site.

Fig. 2. Demonstrating the gating behavior of polarizers. (a) A triangular polarizer is oriented to block the camera phone's display light. (b) The orientation of the triangular polarizer is rotated by 90° to transmit the display light.

Fig. 3. Taking images of a stereoscopic pair by use of two transmitter camera phones: (a) geometry and (b) corresponding photograph.

The receiver site has to do two things: (i) transpose the images and (ii) rotate the polarization direction of one of the images. To transpose the images, the image taken by the left transmitting camera phone is sent to the right receiver phone, and likewise, the image taken by the right transmitting camera phone is sent to the left receiver phone. Figure 4 shows the geometry and corresponding photograph for viewing the transposed images at the receiver site.

The operation of the transpose is photographically indicated by the crossed arms in Fig. 4(b). Compare the directions of the tip of the pencil in Fig. 3 and Fig. 4. Before the transpose, the pencil tips point toward each other, whereas after the transpose, the tips point away from each other.

Fig. 4. Viewing the transposed stereoscopic image in the receiver phones through polarizer glasses: (a) geometry and (b) corresponding photograph.


Without this transpose, the tip of the reconstructed pencil image in Fig. 4 would point away from the observer, and the depth information would be reversed (a pseudoscopic image).

A half-waveplate consisting of an ordinary 25-µm-thick cellophane sheet is used to rotate the polarization direction of one of the images. The display of the left camera phone in Fig. 4 is covered with the cellophane half-waveplate with its fast axis (the direction perpendicular to that of the roll of the cellophane) in either the horizontal or the vertical direction. Light from the Panasonic GD 88 display is polarized at 135° from the horizontal direction, whereas that of the cellophane-covered receiver phone is rotated by -90° to 45° from the horizontal direction. Thus, the stereoscopic pair of images is displayed with orthogonal polarizations on the two phones side by side. Images in orthogonal polarizations (135° on the right and 45° on the left) become separable when the observer wears a pair of glasses with orthogonal polarizations. In the polarizer glasses shown in Fig. 4(b), the coverings for the eyes are constructed from polarizer sheets that have been cut down to a size suitable to fit into the frames of the glasses. The transmission axis of the polarizer covering the right eye is oriented at 45°, and that covering the left eye is at 135°. This means that the right eye can see only the screen of the left phone, and similarly, the left eye can see only the screen of the right phone. This configuration eliminates the light paths shown by the dashed lines in Fig. 1(c) and duplicates the desired crisscross light paths shown in Fig. 1(b), thus creating the illusion for the observer that an actual object exists off the screen.
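The 90° rotation produced by the cellophane can be checked with the same Jones formalism. The sketch below is an illustrative check only (not part of the paper); it assumes an ideal half-waveplate and applies its Jones matrix, with the fast axis vertical, to light polarized at 135°, recovering output polarized at 45°.

```python
import numpy as np

def linear_pol(theta_deg):
    """Jones vector of light linearly polarized at theta_deg."""
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t)])

def half_waveplate(fast_axis_deg):
    """Jones matrix of an ideal half-waveplate (overall phase factor dropped)."""
    t = np.radians(fast_axis_deg)
    c, s = np.cos(2 * t), np.sin(2 * t)
    return np.array([[c,  s],
                     [s, -c]])

out = half_waveplate(90) @ linear_pol(135)            # fast axis vertical, input at 135 deg
angle = np.degrees(np.arctan2(out[1], out[0])) % 180
print(angle)                                          # ~45: the polarization is rotated by 90 deg
```

Repeating the calculation with the fast axis horizontal gives the same 45° output, which is consistent with the statement that either orientation of the cellophane works.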

4. Eye Fatigue

To ease the problem of eye fatigue, we now consider a modification of the configuration. We begin by reviewing the factors that contribute to eye fatigue. The eyeball itself rotates when viewing an object. The inside muscle (medial rectus muscle) contracts, and the outside muscle (lateral rectus muscle) relaxes, to turn the eyeball inward. This muscle movement is called convergence (or simply vergence) toward the position of the object. At the same time, the focal length of the eye lens is adjusted by the tension of the ciliary body muscle to focus the image clearly on the retina. This adjustment is called accommodation of the eye to the position of the object. Our brain judges the distance to the object primarily from parallax, but convergence and accommodation are two other major factors. (Certainly the process is much more complex, and there are other cues, such as occlusion, relative size, shadows, foreshortening, relative brightness, atmosphere and texture gradients, and movement parallax,5,22–26 to name a few.) It has been shown clinically5 that if the point of convergence is different from that of accommodation, eye fatigue accumulates. In Fig. 4 the points of convergence and accommodation are different.


Fig. 5. Wedge prisms shift the image to the original location to relieve eye fatigue.

The point of convergence is at the midpoint between the camera phones and the eyes, whereas the point of accommodation is on the camera phone display. This difference is not only a source of eye fatigue; some people are unable to achieve binocular fusion and cannot see the 3D effect even with a slight difference between the points of convergence and accommodation. This situation is analogous to color blindness, in which some people cannot see certain colors. A pair of prisms was used to alleviate this problem, as shown in Fig. 5. Prisms with a 5.6° wedge angle were used; each prism deviates the path of light by approximately 3°. The prisms shift the location of the image as shown by the dashed lines in the figure. This brings the image almost back to the position where the object was originally located, without harming the three-dimensionality of the image. This arrangement not only reduces eye fatigue but also lessens the problems associated with poor binocular fusion. The required angle of deviation of the prism depends on the viewing configuration. It can certainly be determined by calculation, but an easier way is as follows. When an observer looks through the pair of glasses with prisms at an ordinary object, the observer sees double images. The wedge angle of the prism is chosen such that the spacing of the double images becomes equal to the spacing between the pair of stereoscopic images. The cross-sectional shape of the pair of prisms resembles a single plano-convex lens with its center portion removed.
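For reference, the quoted deviation is consistent with the thin-prism approximation, deviation ≈ (n − 1) × wedge angle. The one-line check below is illustrative only and assumes ordinary glass with a refractive index of about 1.5, a value not given in the paper.

```python
# Thin-prism approximation: deviation ≈ (n - 1) * wedge angle (valid for small angles)
n = 1.5            # assumed refractive index of the prism glass (not stated in the paper)
wedge_deg = 5.6    # wedge angle quoted in the text
deviation_deg = (n - 1) * wedge_deg
print(f"deviation = {deviation_deg:.1f} deg")   # 2.8 deg, consistent with the ~3 deg quoted
```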

Fig. 6. Eliminating the need to wear glasses.

Fig. 7. Geometry for taking a stereoscopic pair of pictures by means of slip-on stereoscopic mirrors.

This suggests an easy method for combating poor binocular fusion. Simply looking through an ordinary magnifying glass may solve the problem of poor binocular fusion. An added bonus of the magnifying glass is that it enlarges the image (see Appendix A).

5. Eliminating the Need to Wear Polarizer Glasses

The necessity for the observer to wear polarizer glasses can be eliminated by letting the camera phones wear the glasses instead. Instead of a pair of prisms, a plano-convex lens was used to fuse the stereoscopic images. As illustrated in Appendix A (see Fig. 10 below), try to look at a finger stretched out in front of you through diametrically opposite edges of an ordinary magnifying lens. The single image of your finger starts to split into a double image as you stretch your arm farther. When the spacing of this double image becomes equal to the spacing between the pair of stereoscopic images of the camera phones, that is the required stretch of your arm. Note that for a given stretch of the arm, an increase in the distance between the eyes and the lens also results in an increase in the spacing of the double image. But as the eyes move away from the lens, the range of allowed lateral movement of the eyes decreases, and finally the tolerance becomes uncomfortably small.

Encasing the two camera phones in a small box with an opening in its cover allows practical use of a magnifying lens. The box is made telescopic so that its depth is adjustable. As shown in Fig. 6, a polarizer sheet is placed in the opening with its transmission axis at 135° from the horizontal direction. Covering the right half of the polarizer sheet with a cellophane sheet changes the direction of the transmission axis to 45°. Although a glass lens was used in Fig. 6, a plastic lens has the advantage of lighter weight. If a plastic lens is used, it should not be placed between any polarizer sheets, because residual stress in the plastic may create birefringent fringe patterns; the plastic lens should be placed on the outside surface of the box rather than on the inside surface. It should be added that this approach is possible because the phone normally has only one viewer at a time, and the viewer instinctively holds his or her head in a position that matches the position of the display.

6. Three-Dimensional Display with a Single Camera Phone

In this section, we describe a method for achieving the 3D effect with only one camera phone. The camera phone display has to be shared by the two stereoscopic images, and the left and right images have to be transposed. We constructed slip-on stereoscopic mirrors as shown in Fig. 7. The outer pair of mirrors are ordinary mirrors (nearly 100% reflection), and the inner pair of mirrors are half-mirrors (50% transmission and 50% reflection).


Fig. 8. Taking a stereoscopic pair of pictures by means of slip-on stereoscopic mirrors.

Fig. 9. Images taken by the camera phone were transmitted to a laptop computer to compare the picture quality: (a) taken in an ordinary manner and (b) taken with the slip-on stereoscopic mirrors.

The spacing between the outer mirrors is set at the average human eye spacing of 6.5 cm. The scene taken in by the right outer mirror reaches the left side of the stereoscopic image by way of the two inner half-mirrors, and similarly, the scene taken in by the left outer mirror reaches the right side of the stereoscopic image. Thus, the transpose of the images is achieved. A black velvet light fence is placed in front of the inner half-mirrors. The black velvet not only blocks the light coming directly from the scene to the camera phone but also absorbs unwanted stray light. The inner half-mirrors form a right-angle mirror that reflects light back toward its source. As a result, a virtual image of any shiny structure in the vicinity of the lens aperture of the camera phone is formed in front of the camera phone. The harmful effect of this virtual image, however, can be minimized by placing the camera phone as close to the half-mirrors as possible, so that the distance to the virtual image is too short to be focused by the lens. Moreover, the intensity of the virtual image is attenuated to one quarter of that of the original image. The slip-on stereoscopic mirrors have a wide strap that slips the camera phone securely into place, as shown in the photograph in Fig. 8. When the camera phone is slipped into the strap, the strap covers the shiny parts of the camera phone as much as possible, except for the lens opening. One takes a pair of stereoscopic images by pressing the shutter key of the camera phone. The stereoscopic pair is then sent to a dialed distant receiver phone. On the receiver end, only the left half of the receiver phone's liquid-crystal display is covered by the cellophane sheet, so as to rotate the direction of polarization of the light from 135° to 45°. Thus, a stereoscopic pair of images with orthogonal polarizations, 45° on the left half and 135° on the right half, is produced on the receiver camera phone display. The observer at the receiver end can view the 3D image by wearing either crossed polarizer glasses such as those shown in Fig. 4 or clip-on polarizer glasses such as those shown in Fig. 8.

One may do away with the prisms because of the small spacing between the pair of stereoscopic images. One may also do away with wearing the glasses by devising a box such as that shown in Fig. 6. An interesting feature of this camera is that, at the time the 3D picture is taken, the composition of the 3D image can be readily examined on a preview screen even before the shutter is activated; it behaves as a real-time 3D display. This mode of operation is possible because the cellophane sheet remains on the left half of the display screen of the transmitter camera phone at all times. The operator wears clip-on polarizer glasses during the previewing, which can be flipped up for ordinary viewing. Thus, it is possible to examine the composition of an image from the viewpoint of the effectiveness of the 3D image even before capturing it. This preview advantage arises because our display uses optics alone, rather than two-step digital signal processing, to obtain the stereoscopic pair of images. As a matter of fact, we intend to use the principle of such slip-on stereoscopic mirrors to convert a conference video phone of the Canadian Hearing Society into a 3D display. Sign language is inherently a 3D form of communication, and as such, the nuances of sign language often suffer when rendered in two dimensions. High-quality, real-time 3D displays are an exciting prospect for those who rely on sign language for communication.

To compare the quality of the transmitted 3D pictures, images taken by the camera phone were first transmitted to the receiver phone and then transmitted to a laptop computer. The result is compared in Fig. 9 with an image taken in an ordinary manner. The slip-on stereoscopic mirrors inflicted no noticeable degradation in picture quality. The virtual image of the camera aperture is not recognizable. Aside from the visual field of the image being one half of that obtained with the two camera phones, the quality of the reconstructed 3D image is excellent.


The incident light energy is reduced to one quarter owing to the transmission through and reflection from the half-mirrors in the slip-on stereoscopic mirrors, but because the size of the image is also reduced to one half, the resulting reduction in incident light intensity is only one half. The automatic exposure mechanism of the camera can easily cope with the reduced intensity.
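A brief bookkeeping of this estimate, under the reading that each half-mirror passes or reflects half of the light and that the stereoscopic pair occupies only half of the image area:

\[
E' = \tfrac{1}{2}\times\tfrac{1}{2}\,E = \tfrac{1}{4}E, \qquad A' = \tfrac{1}{2}A, \qquad
I' = \frac{E'}{A'} = \frac{E/4}{A/2} = \tfrac{1}{2}\,I,
\]

where E is the incident light energy, A is the image area, I = E/A is the intensity, and the primed quantities are the values with the slip-on stereoscopic mirrors in place.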

7. Conclusion

In summary, we used a half-waveplate made of cellophane to convert an ordinary camera phone into a 3D camera phone. The conversion to 3D was demonstrated in two modes: by use of dual camera phones and by use of a single camera phone. We designed and tested a stereoscopic pair of mirrors that slips onto the camera phone. As a means of coping with the problems of both poor binocular fusion and eye fatigue, a set of prisms was introduced to fuse the stereoscopic pair of images. As an alternative to the prisms, a magnifying lens whose diameter is large enough to allow each eye to see through diagonally opposite edges of the lens proved to serve the same function. By letting the stereoscopic pair of camera phones wear the polarizers, the need for the observer to wear them was eliminated. The optical components used are all very inexpensive, and moreover, the quality of the images does not deteriorate in the process of making them 3D. The 3D conversion does not depend on digital signal processing and is based entirely on optics, making real-time operation possible. Thus, a real-time color 3D image free from the problems of binocular fusion was obtained with plain, ordinary camera phones. Real-time operation is a distinct advantage in such applications as 3D conference video phones; correspondence between a doctor and a patient; 3D displays of road maps for pedestrian route guidance services27,28; 3D map car navigation systems; remote rescue and inspection; prompt recording for architects, real estate agents, insurance claims adjusters, and online journalists; and even simply 3D interactive video games.

Appendix A

The prisms in Fig. 5 played the role of fusing the two images. The following experiment demonstrates that a plain, ordinary magnifying lens can play the same role if its diameter is large enough and the eyes are oriented such that the right eye sees predominantly through the right edge of the lens and the left eye sees predominantly through the left edge, as shown in Fig. 10. After all, the cross-sectional shape of the pair of prisms resembles a single plano-convex lens with the center portion removed. With one hand, hold a magnifying glass to your eyes as shown in Fig. 10. With your other hand, make a V sign with your fingers (the V sign serves as the object to be viewed), and extend your arm.

Fig. 10. The V sign image shifts when viewed with the eyes aligned near the edges of a magnifying glass. The shift depends on the stretch of the arm. By adjusting the stretch of the arm, a three-finger V sign can be observed.

When only the right eye is open, the magnified image of the V sign moves to the right from its original center position, and when only the left eye is open, the image moves to the left of center. When both eyes are open, two sets of V signs are observed. The separation of the two images is increased with the stretch of your arm. You can adjust the distance to your fingers from the lens such that you see three fingers. The image of the finger in the center is your pointer finger fused with your middle finger. Thus, the magnifying lens fuses two images if they are viewed through the diagonally opposite edges.

We express sincere gratitude to Luigi Vedani, Ing, CISA (Certified Information Systems Auditor), for providing valuable information about binocular fusion; Beverly Battram and Mario Andaya of Panasonic Canada, Inc. for their support; Mary Jean Giliberto for proofreading and assisting with preparation of the manuscript; and Megumi Iizuka for discussions on the mechanism of human vision.

References
1. T. Okoshi, Three-Dimensional Imaging Engineering, 2nd ed. (in Japanese) (Asakura, Tokyo, 1997). [An updated version of the English translation Three-Dimensional Imaging Techniques (Academic, New York, 1976).]
2. J. A. Norling, "The stereoscopic art-a reprint," J. SMPTE 60, 286–308 (1953).
3. A. W. Judge, Stereoscopic Photography: Its Application to Science, Industry and Education (Chapman and Hall, London, 1950).
4. R. E. Fisher, "Optics for head-mounted displays," Inf. Displ. 10, 12–18 (1994).
5. T. Izumi, ed., Fundamentals of 3D Vision (Ohm-sha, Tokyo, 1995).
6. H. Isono and M. Yasuda, "Flicker-free field sequential stereoscopic TV system and measurement of human depth perception," J. SMPTE 99, 138–141 (1990).
7. L. Lipton, "Selection devices for field-sequential stereoscopic displays: a brief history," in Stereoscopic Displays and Applications II, J. O. Merritt and S. S. Fisher, eds., Proc. SPIE 1457, 274–282 (1991).
8. H. E. Ives, "Parallax panoramagrams for viewing by reflected light," J. Opt. Soc. Am. 20, 585–592 (1930).
9. H. E. Ives, "Optical properties of a Lippmann lenticulated sheet," J. Opt. Soc. Am. 21, 171–176 (1931).
10. H. E. Ives, "The projection of parallax panoramagrams," J. Opt. Soc. Am. 21, 397–409 (1931).
11. T. Okoshi, "Three-dimensional displays," Proc. IEEE 68, 548–564 (1980).
12. H. Higuchi and J. Hamasaki, "Real-time transmission of 3-D images formed by parallax panoramagrams," Appl. Opt. 17, 3895–3902 (1978).
13. M. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. de Phys. Ser. 4, 7, 821–825 (1908).
14. R. J. Collier, "Holography and integral photography," Physics Today, July 1968, 55–63.
15. F. Okano, H. Hoshino, J. Arai, M. Yamada, and I. Yuyama, "Integral three-dimensional video system," in Three Dimensional Video and Display: Devices and Systems, B. Javidi and F. Okano, eds., Vol. CR76 of SPIE Critical Review Series (SPIE Press, Bellingham, Wash., 2001), pp. 90–116.
16. D. A. Gabor, "A new microscopic principle," Nature 161, 777–778 (1948).
17. E. N. Leith and J. Upatnieks, "Wavefront reconstruction with continuous-tone objects," J. Opt. Soc. Am. 53, 1377–1381 (1963).
18. T. Okoshi and K. Oshima, "Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type," Appl. Opt. 15, 1023–1029 (1976).
19. A. C. Traub, "Stereoscopic display using rapid varifocal mirror oscillations," Appl. Opt. 6, 1085–1087 (1967).
20. S. McKay, S. Mason, L. S. Mair, P. Waddell, and S. M. Fraser, "Stereoscopic display using a 1.2-m diameter stretchable membrane mirror," in Stereoscopic Displays and Virtual Reality Systems VI, J. O. Merritt, M. T. Bolas, and S. S. Fisher, eds., Proc. SPIE 3639, 122–131 (1999).
21. K. Iizuka, "Cellophane as a half-waveplate and its use for converting a laptop computer screen into a 3D display," Rev. Sci. Instrum. 74, 3636–3639 (2003).
22. J. Harrold, A. Jacobs, G. J. Woodgate, and D. Ezra, "Performance of a convertible, 2D and 3D parallax barrier autostereoscopic display," in Proceedings of the Society for Information Display Twentieth International Display Research Conference, J. Morreale, ed. (Society of Information Display, San Jose, Calif., 2000), pp. 280–283.
23. J. Cutting and P. Vishton, "Perceiving layout and knowing distance: the integration, relative potency and contextual use of different information about depth," in Perception of Space and Motion, W. Epstein and S. Rogers, eds. (Academic, New York, 1995), pp. 69–118.
24. E. Goldstein, Sensation and Perception, 3rd ed. (Wadsworth, Belmont, Calif., 1989).
25. H. Sedgwick, "The geometry of spatial layout in pictorial representation," in The Perception of Pictures, Vol. 1, Alberti's Window: The Projective Model of Pictorial Information, M. Hagen, ed. (Academic, London, 1980), pp. 34–90.
26. J. D. Pfautz, "Depth perception in computer graphics," Technical Report 546 (University of Cambridge Computer Laboratory, Cambridge, UK, 2002).
27. H. Sugiyama, "Route guidance services for pedestrians," J. Inst. Electron. Commun. Eng. Jpn. 86, 358–363 (2003).
28. H. Sugiyama, "A pedestrian navigation system based on a navigation demand model," presented at the Eighth World Congress on Intelligent Transport Systems, Sydney, Australia, 30 September–4 October 2001.