Three Cameras Method of Light Sources Extraction in Augmented Reality

Przemyslaw Wiktor Pardel(1) and Konrad Wojciechowski(2)

(1) Chair of Computer Science, University of Rzeszów, Poland
[email protected]
http://www.augmentedreality.pl/pardel
(2) Institute of Information Science, Silesian University of Technology, Poland
[email protected]

Abstract. Transmitting information about light sources is a very important part of an Augmented Reality (AR) system. One of the most essential aspects of making virtual objects look like real ones is lighting. To calculate adequate shadows it is necessary to know the position and shape of all light sources in the real scene, so an image of the real environment must be captured to obtain knowledge of its geometry and of the position and shape of all light sources. In the standard approach to using Image Based Lighting (IBL) in AR, all images are captured with a single digital camera (low dynamic range, LDR). We present research results on using High Dynamic Range (HDR) images from two digital cameras together with IBL in an AR environment for better lighting calculation and extraction of the strongest light sources.

1 Introduction

Augmented Reality (AR) is a field of computer research which deals with the combination of the real world and computer generated data. AR allows the user to see the real world, with virtual objects superimposed upon or composited with it [3]. The beginnings of AR date back to Ivan Sutherland's work in the 1960s (Harvard University and the University of Utah). He used a see-through HMD to present 3D graphics [10]. Milgram et al. [7] introduced the reality-virtuality continuum that defines the term mixed reality and portrays the "link" between the real and the virtual world (Fig. 1). If the real world is at one end of the continuum and VR (i.e. a computer-generated, artificial world) is at the other end, then AR occupies the space closer to the real world. The closer a system is to the VR end, the more the real elements are reduced [13]. AR can potentially apply to all senses, including hearing, touch, and smell. Certain AR applications also require removing real objects from the perceived environment, in addition to adding virtual objects [2]. AR systems have already been used in many applications such as the military, emergency services (enemy locations, fire cells), prospecting in hydrology, ecology, and geology (interactive terrain

L. Bolc et al. (Eds.): ICCVG 2010, Part II, LNCS 6375, pp. 183–192, 2010. © Springer-Verlag Berlin Heidelberg 2010


Fig. 1. Reality-Virtuality continuum

analysis), visualization of architecture (resurrecting destroyed historic buildings), enhanced sightseeing, simulation (flight and driving simulators), virtual conferences, entertainment, and education (games, e.g. ARQuake).

Fig. 2. Virtual reconstruction of destroyed historic building (Olympia, Greece) (source: official project site ARCHEOGUIDE, http://archeoguide.intranet.gr)

The main requirements for AR systems are:

– Combines real and virtual objects in a real environment;
– Runs interactively, and in real time;
– Registers (aligns) real and virtual objects with each other;
– Virtual augmentation should be indistinguishable from real objects;
– Real objects can be augmented with virtual annotations;
– Virtual augmentation can be viewed and examined without limitations;
– The user should be able to easily enter and leave the system;
– Multiple users can see each other and cooperate in a natural way;
– Each user controls his own independent viewpoint.

All AR systems should aim to fulfil these requirements. It is important to draw attention to the fact that all of these requirements are very hard, and sometimes impossible, to realize with current methods and technology.

2 Difficulties of Transmitting Information about Light Sources from Reality to Augmented Reality

In a Virtual Reality (VR) environment, information about all light sources (position, shape, brightness, etc.) is defined by the environment creator. In an AR environment this information is taken directly from the real environment. This is not an easy task, but transmitting information about light sources is a very important part of an AR system, because one of the most essential aspects of making virtual objects look like real ones is lighting. To calculate adequate shadows it is necessary to know the position and shape of all light sources in the real scene.

Fig. 3. Soft shadows generated using the "Smoothies" algorithm (source: [4])

Difficulties in transmitting information about light sources:

– The geometry of the real environment is usually not known;
– We do not have information about the position and shape of all light sources.

2.1 Methods of Capturing the Real Environment Image

It is necessary to capture an image of the real environment to obtain knowledge of its geometry and information about the position and shape of all light sources. Among the many methods in use, the most important are:

– Capturing the image of a mirrored sphere (in its reflection nearly the entire environment can be seen);
– Merging multiple images into a single image;
– Using a fisheye lens (all ultra-wide angle lenses suffer from some amount of barrel distortion);
– Capturing only a selected part of the environment (e.g. the ceiling);
– Using special rotating cameras.

Currently Used Methods (Mirrored Sphere): One Camera Method. This is a very popular method because only one camera is needed to observe the environment and capture the real environment image (Figure 5). The camera captures a video of the scene which is afterwards overlaid with virtual objects. In this method a mirrored sphere which reflects the environment is used. The sphere

Fig. 4. Mirrored sphere image and its environment image (source: [12])
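The relation between a mirrored sphere photograph and the environment it reflects (Fig. 4) can be sketched by computing, for each pixel of the sphere image, the environment direction seen in the reflection. This is a minimal sketch assuming an orthographic camera looking along -z at a unit sphere; the function name and image size are illustrative only.

```python
import numpy as np

def mirror_ball_directions(size):
    """For each pixel of a square mirrored-sphere photograph, return the
    world-space direction seen in the reflection (NaN outside the ball).
    Assumes an orthographic camera looking along -z at a unit sphere."""
    u = (np.arange(size) + 0.5) / size * 2.0 - 1.0   # pixel centres in [-1, 1]
    x, y = np.meshgrid(u, -u)                        # flip rows so +y is up
    r2 = x * x + y * y
    z = np.sqrt(np.where(r2 <= 1.0, 1.0 - r2, np.nan))
    n = np.stack([x, y, z], axis=-1)                 # sphere surface normal
    view = np.array([0.0, 0.0, -1.0])                # incoming view ray
    # reflect the view ray about the normal: r = v - 2 (v.n) n
    vdotn = n @ view
    return view - 2.0 * vdotn[..., None] * n

dirs = mirror_ball_directions(256)
# centre pixels reflect back past the camera (+z); the rim of the ball
# reflects the environment directly behind the ball (-z)
```

Sampling the sphere photograph at these directions yields the environment map used for lighting.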


Fig. 5. One camera method

is in a fixed position relative to a marker of a visual tracking system. A marker is a characteristic tag, placed in a visible place in the scene, used in AR systems to compute virtual object positions. This method has a lot of disadvantages, such as:

– complicated extraction of the mirrored sphere from the scene image;
– poor resolution of the extracted mirrored sphere;
– the mirrored sphere must be placed within the observed scene.

Two Cameras Method. This method is free from most of the disadvantages of the One Camera Method. The reduction of these disadvantages was obtained by using two cameras (two devices: the first to capture the real environment image and the second to observe the environment). In the standard setup (Figure 6) the first camera captures the real environment image (mirrored sphere or fisheye lens (Figure 7)). The second camera captures a video of the scene which is afterwards overlaid with virtual objects.

Fig. 6. Two cameras method


Fig. 7. Fisheye lens

2.2 High Dynamic Range Images (HDR)

Real-world scenes usually exhibit a range of brightness levels greatly exceeding what can be accurately represented with 8-bit per-channel images [5]. High dynamic range imaging (HDRI or just HDR) is a set of techniques that allow a greater dynamic range of luminances between the lightest and darkest areas of an image than standard digital imaging or photographic methods; images captured with those standard methods are known as low dynamic range (LDR) [5] or standard dynamic range (SDR) [11] images. High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image (Figure 8).

Fig. 8. HDR creation
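The merging step in Fig. 8 can be sketched as a weighted per-pixel average of the radiance estimates from each exposure. This is a simplified, linear-response variant of the Debevec-style merge; the function name and the "hat" weighting are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def merge_hdr(ldr_images, exposure_times):
    """Merge a bracketed series of 8-bit LDR frames into an HDR radiance map.

    Assumes a linear camera response: each pixel's radiance estimate is
    value / exposure_time, combined with a 'hat' weight that distrusts
    under- and over-exposed values."""
    acc = np.zeros(ldr_images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(ldr_images, exposure_times):
        v = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * v - 1.0)      # hat weight, peaks at mid-gray
        acc += w * (v / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# two exposures of a uniform patch with true radiance ~2.0
a = np.full((2, 2), 51, dtype=np.uint8)      # 0.1 s exposure
b = np.full((2, 2), 128, dtype=np.uint8)     # 0.25 s exposure
hdr = merge_hdr([a, b], [0.1, 0.25])         # values close to 2.0
```

Both exposures independently estimate the same radiance, and the weighting suppresses the clipped or noisy ends of each frame's range.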

3 Proposal of a New Method: Three Cameras Method

In this method two cameras capture the real environment image (mirrored sphere or fisheye lens). Their view parameters (light filters, exposure times, bracketing) are modified so as to collect a series of images with different exposures. The images from the two cameras are used to create an HDR image, which serves as the environment image. A third camera captures a video of the scene which is afterwards overlaid with virtual objects.


Fig. 9. Three cameras method

3.1 Hypothesis

The three cameras method gives better light source prediction. The inspiration for framing this hypothesis: due to the limited dynamic range of digital cameras it is very likely that many pixels share the same value (pure white), although the real light emission of the corresponding objects differs greatly, for example:

– A bulb placed in a window on a sunny day;
– Two sources with a considerable difference in brightness placed next to each other (Figure 10).

Fig. 10. Bulb placed in a window on a sunny day, and five light sources placed next to each other
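The saturation effect behind the hypothesis can be sketched directly: with a single exposure, an 8-bit sensor clips two very different emissions to the same pure-white value, while a second, strongly underexposed frame keeps them distinguishable. The radiance values and exposure times below are illustrative assumptions.

```python
import numpy as np

def expose(radiance, t):
    """Simulate an 8-bit sensor: linear response, then clipping at 255."""
    return np.clip(np.round(radiance * t * 255.0), 0, 255).astype(np.uint8)

# two light sources whose real emission differs by a factor of 10
radiance = np.array([5.0, 50.0])

normal = expose(radiance, 1.0)    # both pixels saturate to pure white (255)
short = expose(radiance, 0.01)    # the underexposed frame separates them
```

In the normally exposed frame the two sources are indistinguishable, which is exactly the information loss the HDR environment map avoids.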

3.2 Experiment and Experimental Proof of the Investigative Hypothesis in AR Research

The experiment's goal was to test the earlier framed hypothesis, which was confirmed by the experiment's results. The results of the experiment were HDR images (Figure 12), whose advanced analysis using the Light Mask [1] technique allowed the hypothesis to be tested.


Fig. 11. Scheme of experiment

Fig. 12. Results of experiment

Experimental proof, as applied in the natural sciences, was used in AR to verify the hypothesis, employing Image Based Lighting (IBL) [9] and image based shadows in a real-time AR environment [6].

3.3 Experiment Conclusions

Increasing the number of LDR images used to create the HDR is not tantamount to increasing the knowledge about the position and shape of all light sources in the real scene. The most important factor is the exposures of the images used to create the HDR.

– The number of images used to create the HDR environment map does not proportionally increase the knowledge about the position and shape of all light sources in the real scene;
– After properly selecting the sequence of Low Dynamic Range (LDR) [1] images, the knowledge about the position and shape of all light sources in the real scene


Fig. 13. Image taken to experiment

Fig. 14. HDR created from two LDR images

Fig. 15. Light Mask

obtained from an HDR created from 2 LDR images is comparable to the knowledge obtained from an HDR created from 3 or more LDR images;
– Depending on the kind of images used to create the HDR, the knowledge about the position and shape of all light sources in the real scene differs significantly;


– The knowledge about the position and shape of all light sources in the real scene obtained from an HDR created from 2 or more LDR images is greatest when one of the images is significantly underexposed.

Benefits and Limitations of the Method. Benefits of the method:

– Increased knowledge about the position and shape of all light sources in the real scene.

Limitations of the method and problems with its application:

– Using a multi-camera system (2 cameras) requires calibration (the cameras observe the ball from different angles);
– To obtain the maximum efficiency of the three camera setup, it is indispensable to adjust each camera's exposure individually for the scene.
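The extraction of the strongest light sources from the HDR environment map can be sketched as luminance thresholding followed by connected-component grouping. This is a simplified light-mask style extraction, not necessarily the exact Light Mask technique of [1]; the function name, threshold, and test values are illustrative.

```python
import numpy as np
from collections import deque

def extract_light_sources(hdr_luminance, threshold):
    """Return (centroid_row, centroid_col, total_energy) for each connected
    region of the HDR luminance map brighter than `threshold`,
    sorted with the strongest source first."""
    mask = hdr_luminance > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    sources = []
    for sr in range(h):
        for sc in range(w):
            if not mask[sr, sc] or seen[sr, sc]:
                continue
            # flood-fill one connected bright region (4-connectivity)
            queue, pixels = deque([(sr, sc)]), []
            seen[sr, sc] = True
            while queue:
                r, c = queue.popleft()
                pixels.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                        seen[nr, nc] = True
                        queue.append((nr, nc))
            rows, cols = zip(*pixels)
            energy = float(hdr_luminance[rows, cols].sum())
            sources.append((sum(rows) / len(rows), sum(cols) / len(cols), energy))
    return sorted(sources, key=lambda s: -s[2])

# a tiny synthetic luminance map with two bright regions
lum = np.full((8, 8), 0.5)
lum[1:3, 1:3] = 10.0          # a 2x2 area light
lum[6, 6] = 50.0              # a small but very strong source
sources = extract_light_sources(lum, threshold=5.0)
```

Each extracted centroid, mapped back through the environment image, gives a candidate light source position for shadow calculation.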

Fig. 16. Different camera angle observation

4 Conclusions and Future Work

A new method was presented which gives better light source prediction in the real environment. The usefulness of the new method was experimentally confirmed. However, some problems with its application have appeared, related to the use of multi-camera systems (calibration). Further work is planned to improve this method by employing other technologies:

– Applying adaptive light filters;
– Applying artificial intelligence to light source prediction;
– Using one camera to create the HDR with an automatic system for changing the light filter.

References

1. Akyüz, A.O., Reinhard, E.: Color appearance in high-dynamic-range imaging. SPIE Journal of Electronic Imaging 15(3) (2006)
2. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., MacIntyre, B.: Recent advances in augmented reality. IEEE Computer Graphics and Applications 21(6), 34–47 (2001)


3. Azuma, R.T.: A survey of augmented reality. In: Computer Graphics, SIGGRAPH 1995 Proceedings, Course Notes no. 9: Developing Advanced Virtual Reality Applications, pp. 1–38 (August 1995)
4. Chan, E., Durand, F.: Rendering fake soft shadows with smoothies. In: Proceedings of the Eurographics Symposium on Rendering, pp. 208–218. Eurographics Association (2003)
5. Cohen, J., Tchou, C., Hawkins, T., Debevec, P.E.: Real-time high dynamic range texture mapping. In: Proceedings of the 12th Eurographics Workshop on Rendering Techniques, London, UK, pp. 313–320. Springer, Heidelberg (2001)
6. Haller, M., Supan, P., Stuppacher, I.: Image based shadowing in real-time augmented reality. International Journal of Virtual Reality (2006)
7. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Transactions on Information Systems E77-D(12) (December 1994)
8. Milgram, P., Takemura, H., Utsumi, A., Kishino, F.: Augmented reality: a class of displays on the reality-virtuality continuum. In: Das, H. (ed.) Proc. SPIE Telemanipulator and Telepresence Technologies, vol. 2351, pp. 282–292 (1995)
9. Supan, P., Stuppacher, I.: Interactive image based lighting in augmented reality. In: Digital Media. Upper Austria University of Applied Sciences, Hagenberg, Austria (2006)
10. Sutherland, I.E.: A head-mounted three dimensional display. In: Proceedings of the AFIPS 1968 Fall Joint Computer Conference, Part I, December 9–11, pp. 757–764. ACM, New York (1968)
11. Vonikakis, V., Andreadis, I.: Fast automatic compensation of under/over-exposured image regions. In: Mery, D., Rueda, L. (eds.) PSIVT 2007. LNCS, vol. 4872, pp. 510–521. Springer, Heidelberg (2007)
12. Witte, K.: How to shoot a chrome ball for HDRI (2009)
13. Zlatanova, S.: Augmented reality technology (2002)