MIRAGE: A Touch Screen based Mixed Reality Interface for Space Planning Applications

Gun A. Lee*    Hyun Kang†    Wookho Son‡

Electronics and Telecommunications Research Institute

*[email protected]    †hkang@etri.re.kr    ‡[email protected]

ABSTRACT

Space planning is one of the popular applications of VR technology, including interior design, architectural design, and factory layout. To provide easier and more efficient methods for accommodating physical objects into the virtual space under plan, we suggest applying a mixed reality (MR) interface. Our MR system consists of a video see-through display with a touch screen interface, mounted on a mobile platform, and we use screen space 3D manipulations to arrange virtual objects within the MR scene. Investigating the interface with our prototype implementation, we are convinced that our system will help users design spaces in an easier and more effective way.

Keywords: Mixed reality, augmented reality, space planning, touch interface, plant layout.

Index Terms: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems – Artificial, augmented, and virtual realities

1 INTRODUCTION

One of the popular applications of VR technology is testing space layouts (e.g., factory layout, architectural design, and interior design) in a virtual environment. To mention just a few, there are a number of commercial desktop 3D applications for home and office space planning, such as 3D Home Architect (www.3dhaonline.com), plan3D (www.plan3d.com), InSitu (www.transtechnology.fr), IKEA Office & Home Planner (www.ikea.com), and Google Sketchup (www.sketchup.com). In the industrial field, there are also professional tools for planning factory layouts, such as DELMIA (www.delmia.com), Tecnomatix FactoryCAD (www.ugs.com) and AutoCAD (www.autodesk.com). These tools help users try and test different spatial layouts within a virtual environment and decide on an optimized plan.

Within a virtual environment, changing virtual objects and arranging them in space is easy and inexpensive, encouraging users to try a number of different cases. However, when there are pre-installed physical objects, users need to create 3D models of them, which is a time-consuming task. Although 3D scanning solutions are available, it is still expensive to scan the whole physical space of a factory or building. To overcome (or lessen) these problems, we suggest taking advantage of Mixed Reality (MR) technology [5], which provides users with a merged experience of virtual and real spaces. With an MR interface, users can add virtual objects onto the real world scene, and the need for modeling physical objects can be reduced drastically.







2 SYSTEM DESIGN

2.1 System Design Requirements

We identified four design requirements for our system according to the features of the space planning task. First, the system should support a live and life-size mixed reality environment, to help users understand the space layout in the context of the real space and in a human-centered way. Second, the interface should support multiple audiences. Design tasks, including space planning, usually involve multiple participants (some of them designers and some of them engineers) reviewing the plan from various perspectives. Third, the interface should be mobile and self-contained, so that it can easily be moved around. This is important for supporting large spaces such as a big office room, a factory, or even an outdoor environment. Finally, it should be easy and comfortable to interact with, without the need to wear special devices or place additional equipment around. While wearable interfaces are mobile and easy to use in many cases, the need to wear special devices (which are often not small enough) not only makes users uncomfortable but also raises problems of social acceptance, so we try to avoid it.

2.2 Hardware Platform

Figure 1 shows the structure of our hardware platform. In our system, a video camera is attached to the back of a flat panel touch monitor. The camera captures the real scene behind the monitor, and this live video image is merged with a virtual scene, forming a video see-through [1] display configuration. A desktop monitor fits our requirements well, since it is wide enough to show big objects (such as furniture and factory equipment) and allows multiple users to share the view. Since the touch monitor and the other equipment (including a computer) are too heavy for users to carry and hold, we built our system on a mobile station and attached the monitor on top of a rotational joint (similar to a tripod head). With our system, users can easily look around and observe the whole surroundings at one location, and move the system to another place if needed.

Figure 1. Hardware platform overview
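As an illustration of the video see-through merging step, here is a minimal sketch (our own, not the authors' code) that composites a rendered virtual layer over the live camera frame; it assumes the virtual scene has already been rendered to an RGBA image whose alpha is zero wherever no virtual object was drawn:

```python
import cv2
import numpy as np

def merge_frame(camera_frame, virtual_rgba):
    """Composite a rendered virtual layer over a live camera frame.

    camera_frame: HxWx3 BGR image from the camera behind the monitor.
    virtual_rgba: HxWx4 image of the virtual scene (hypothetical rendering
                  output); alpha is 0 where no virtual object was drawn.
    """
    alpha = virtual_rgba[:, :, 3:4].astype(np.float32) / 255.0
    virtual_bgr = virtual_rgba[:, :, :3].astype(np.float32)
    merged = alpha * virtual_bgr + (1.0 - alpha) * camera_frame.astype(np.float32)
    return merged.astype(np.uint8)

# Illustrative capture loop:
cap = cv2.VideoCapture(0)  # the camera attached to the back of the monitor
ok, frame = cap.read()
if ok:
    virtual = np.zeros((*frame.shape[:2], 4), np.uint8)  # placeholder layer
    cv2.imshow("MR view", merge_frame(frame, virtual))
    cv2.waitKey(1)
cap.release()
```

In an actual system the virtual layer would come from the scene graph renderer rather than a placeholder array; the compositing itself stays the same.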



Considering that additional props [4] and wearable interfaces are ruled out by our design requirements, we decided to use a touch screen interface for interaction. Not only is it easily integrated into our hardware platform, but a touch screen also does not require users to hold or wear additional devices, while still offering easy and intuitive methods of interaction. With the touch screen interface, users can interact with the MR scene simply by tapping or dragging the virtual objects displayed on it.

In comparison with previous work, our hardware configuration is similar to the BOOM [2] or the Planar [7], except that our system provides MR visualization. The Augurscope [6] also resembles our MR configuration; however, it only provides interactions for viewing the MR scene, without manipulating virtual objects. Roivis, an MR-based planning tool from Metaio (http://www.metaio.com), is based on photographs and is therefore limited to fixed viewpoints.

2.3 MR Visualization and Interaction

MR visualization needs camera calibration, the process of obtaining the internal (projection) and external (position and orientation) parameters of the camera in physical space. We use a pre-processing tool to obtain the internal parameters, while the external parameters are calculated at run time based on tracking sensor data and registration information. Registration is done by pointing at feature points in the video image that correspond to those in the virtual scene. We also use heuristic information, such as the height of the camera from the floor, which can be measured in advance. We plan to improve this process with computer vision techniques in the future.

Since the system does not include a stereoscopic display, users can easily lose depth perception and misunderstand the 3D scene structure if it is not visualized properly. One problem we observed was that users easily perceived virtual objects as floating in the air; the major reason we found was the missing shadows under the virtual objects. Simply adding shadows to virtual objects with semi-transparent polygons solved this problem well. Another problem was occlusion between real and virtual objects, a classical problem in MR visualization that arises from the lack of depth information about the physical scene. To correct it, we added transparent 3D models corresponding to the physical objects to obtain the depth information [3].

The main tasks of planning and designing a spatial arrangement require interaction methods for manipulating virtual objects in 3D space. For this, we use screen space 3D manipulation, which meets our requirements and supports natural and intuitive interaction with a touch screen. We calculate 3D rays from the 2D points on the screen where the user touched, and these rays (representing the pointing direction) allow users to point at virtual objects or drag them in the MR scene. 2D graphical user interfaces, such as menus and dialog boxes, are also used with the touch screen for issuing system commands or precisely controlling the virtual objects.
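To make the screen space manipulation concrete, here is a minimal sketch (our own illustration, not the paper's code) of turning a touch point into a pointing ray and dragging an object along the floor; the intrinsics fx, fy, cx, cy and the pose R, t are assumed to come from the calibration and registration steps described above, and the floor is assumed to be the world plane y = 0:

```python
import numpy as np

def touch_to_ray(u, v, fx, fy, cx, cy, R, t):
    """Convert a 2D touch point (u, v) into a world-space pointing ray.

    fx, fy, cx, cy: internal camera parameters (from pre-calibration).
    R (3x3), t (3,): external parameters, with x_cam = R @ x_world + t.
    Returns the ray origin (camera center) and a unit direction in world space.
    """
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # ray in camera space
    d_world = R.T @ d_cam                                  # rotate into world space
    origin = -R.T @ t                                      # camera center in world
    return origin, d_world / np.linalg.norm(d_world)

def drag_on_floor(origin, direction):
    """Intersect the pointing ray with the floor plane y = 0 to find the
    point where a dragged virtual object should be placed."""
    if abs(direction[1]) < 1e-6:
        return None                       # ray parallel to the floor
    s = -origin[1] / direction[1]
    return origin + s * direction if s > 0 else None
```

Tapping then reduces to intersecting the same ray with the virtual objects' bounding volumes and picking the nearest hit.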

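One common way to realize the occlusion correction with transparent "phantom" models [3] is to render those models into the depth buffer only, before the visible virtual objects. A hedged OpenGL-style sketch (PyOpenGL, assuming a current GL context and hypothetical draw callbacks):

```python
from OpenGL.GL import (glColorMask, glEnable, glClear, GL_FALSE, GL_TRUE,
                       GL_DEPTH_TEST, GL_DEPTH_BUFFER_BIT)

def render_mr_scene(draw_phantoms, draw_virtual_objects):
    """Render virtual objects so that real objects occlude them correctly.

    draw_phantoms: callable drawing invisible 3D models of the real,
                   physical objects (hypothetical scene callbacks).
    draw_virtual_objects: callable drawing the visible virtual objects.
    """
    glEnable(GL_DEPTH_TEST)
    glClear(GL_DEPTH_BUFFER_BIT)

    # Pass 1: write the phantoms' depth only; no color is drawn, so the
    # live video of the real objects remains visible behind them.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    draw_phantoms()

    # Pass 2: draw the virtual objects; fragments behind a phantom fail
    # the depth test and are hidden, as if occluded by the real object.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    draw_virtual_objects()
```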
3 IMPLEMENTATION

Figure 2 shows our prototype MR system, named 'MIRAGE'. As its name implies, the system is designed to give users the illusion that virtual objects are present in the physical space. It is built on a PC platform with a 19-inch touch screen monitor and a video camera (1024x768, 30 fps) attached to its back, forming a video see-through MR display. An orientation sensor from Xsens Technologies (www.xsens.com) is used for tracking the camera; it integrates a gyroscope with inertial and earth-magnetic field sensors, providing 3D orientation data at a 120 Hz update rate. The MR software is developed on top of the OpenSG (http://www.opensg.org) scene graph library.

Figure 2. MIRAGE – a prototype implementation
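As an illustration of how the external parameters could be assembled at run time from the orientation sensor plus the pre-measured camera height (a sketch under our own assumptions, not the authors' code; the quaternion convention and a station fixed at the world origin are assumed):

```python
import numpy as np

def view_from_sensor(q, camera_height):
    """Build external camera parameters from an orientation sensor reading.

    q: unit quaternion (w, x, y, z) reported by the orientation sensor
       (assumed to give the world-to-camera rotation).
    camera_height: height of the camera above the floor, measured in
                   advance (the heuristic mentioned in Section 2.3).
    Returns R (3x3) and t (3,) such that x_cam = R @ x_world + t.
    """
    w, x, y, z = q
    # Standard rotation matrix of a unit quaternion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    camera_pos = np.array([0.0, camera_height, 0.0])  # station at world origin
    t = -R @ camera_pos
    return R, t
```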

4 CONCLUSION AND FUTURE WORK

In this paper, we proposed a mixed reality interface for space planning applications that reduces the effort of modeling the physical space. Following the design requirements, we proposed a touch screen based mobile MR hardware platform. The interaction and visualization methods of our system were carefully designed to meet the requirements and to help users easily manipulate and arrange the mixed reality scene.

While our prototype implementation realizes the basic concepts of our design, some parts still need improvement. To improve tracking and registration quality, we plan to use mechanical sensors and apply computer vision techniques. We are also considering integrating battery power for outdoor environments. As a next step, we plan to add 3D modeling functions to the system. Currently, our system loads 3D models created in a separate modeling tool; allowing users to model 3D objects directly within our MR system would let them explore more diverse alternatives while designing the space.

ACKNOWLEDGEMENTS

This work was supported by the IT R&D program of MIC/IITA [2005-S604-04, Realistic Virtual Engineering Technology Development].

REFERENCES

[1] Ronald T. Azuma. A Survey of Augmented Reality. In Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pages 355-385, MIT Press, August 1997.
[2] Mark T. Bolas. Human Factors in the Design of an Immersive Display. In Computer Graphics and Applications, Vol. 14, No. 1, pages 55-57, IEEE, January 1994.
[3] David E. Breen, Ross T. Whitaker, Eric Rose and Mihran Tuceryan. Interactive occlusion and automatic object placement for augmented reality. In Computer Graphics Forum, Vol. 15, No. 3, pages 11-22, 1996.
[4] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto and K. Tachibana. Virtual Object Manipulation on a Table-Top AR Environment. In Proceedings of the International Symposium on Augmented Reality 2000 (Munich, Germany, Oct. 5-6, 2000), pages 111-119, 2000.
[5] Paul Milgram and Fumio Kishino. A taxonomy of mixed reality visual displays. In IEICE Transactions on Information Systems, Vol. E77-D, No. 12, pages 1321-1329, December 1994.
[6] Holger Schnädelbach, Boriana Koleva, Mark Paxton, Mike Twidale, Steve Benford and Rob Anastasi. The Augurscope: Refining its Design. In Presence: Teleoperators and Virtual Environments, Vol. 15, No. 3, pages 278-293, MIT Press, June 2006.
[7] Ralph Schoenfelder and Frank Spenling. The Planar: An Interdisciplinary Approach to a VR Enabled Tool for Generation and Manipulation of 3D Data in Industrial Environments. In Proceedings of Virtual Reality 2005 (Bonn, Germany, Mar. 12-16, 2005), pages 261-264, IEEE, 2005.