Interactive Mobile 3D Graphics for On-the-go Visualization and Walkthroughs

Maria Andréia F. Rodrigues

Rafael G. Barbosa

Nabor C. Mendonça

Mestrado em Informática Aplicada, Universidade de Fortaleza - UNIFOR
Av. Washington Soares 1321, J(30), 60811-905 Fortaleza-CE, Brazil
Tel.: +55 85 3477-3268

[email protected]

[email protected]

[email protected]

ABSTRACT
Developing interactive 3D graphics for mobile Java applications is now a reality. Recently, the Mobile 3D Graphics (M3G) API (also known as JSR-184) was proposed to provide an efficient 3D graphics environment suitable for the J2ME platform. However, new services and applications using interactive 3D graphics, which have already achieved reasonable standards on the desktop, do not yet exist for resource-constrained handheld devices. In this work, a generic architecture for visualizing and navigating through 3D worlds in a mobile setting was designed and implemented. In particular, a 3D virtual tour application was developed based on the proposed architecture, in which multiple mobile clients using M3G navigate through and interact with each other in a shared 3D space.

Categories and Subject Descriptors
I.3.8 and I.3.2 [Computer Graphics]: Applications and Graphics Systems – Distributed/network graphics, respectively.

Keywords
Mobile 3D Graphics, Collaborative Architecture, Virtual Environments, Handheld Computing Application.

1. INTRODUCTION
Given current technology trends, mobile communication devices and 3D applications are playing an increasingly important role in the development of services for handheld devices, particularly because interactive 3D environments and participative services are essential to satisfy the needs of today's users. Consequently, handheld devices are expected to soon replace desktop computers as the predominant platform for users to visualize and walk through 3D virtual environments. The motivation is that these devices and applications have been designed primarily to increase efficiency and productivity for people on the go [11].


Despite innovative advances in computing capability and wireless communication services, mobile systems still have some limitations: they are typically slow, unreliable, and restricted in their user interfaces [9]. Compared with current desktop computers, a significant problem for mobile graphical applications is to achieve an implementation that is as compact as possible, limited by the amount of memory available and the low processing capabilities. Further, new services and applications using 3D graphics and interaction, which have already achieved reasonable standards on the desktop [5, 17], do not yet exist for resource-constrained handheld computing devices. In this respect, there is an evident need for 3D interactive environments hosted on high-capacity servers that can be accessed, visualized and navigated by remote mobile users on the go. Recently, the Mobile 3D Graphics (M3G) API (also known as JSR-184) was proposed to provide an efficient 3D graphics environment suitable for the J2ME platform [13, 14]. This means that developing interactive 3D graphics for mobile Java applications is now a reality. Consequently, it is natural to expect that handheld device users will require increasingly sophisticated interactive 3D services, allowing them, for instance, to visualize and navigate through complex 3D spaces in virtual tour applications while they are on the go. Examples of such handheld computing applications include virtual museums, galleries, shopping centers and universities, where users can take advantage of their mobile devices to guide them while they physically walk through those environments, and also to provide them with detailed information about the places, items and services available at each visited location. Several other types of mobile applications can also benefit from M3G, including games, map visualization, user interfaces, animated messages, product visualization, and screen savers. In this work, a generic architecture for visualizing and navigating through 3D graphics scenes in a mobile setting was designed and implemented. In particular, a 3D virtual museum application was developed based on the proposed architecture, in which multiple mobile clients can navigate through and interact with each other in a shared 3D space. The rest of the paper is organized as follows. The next section discusses related work. Sections 3-5 describe our proposed architecture and the virtual museum application in detail. Section 6 concludes the paper with a summary of our research and suggestions for future work.
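To make the setting concrete, the fragment below shows the minimal retained-mode rendering step that M3G adds to a MIDP application. The M3G class names come from JSR-184 [14]; the surrounding canvas class is only an illustrative sketch, and the scene is assumed to have been loaded elsewhere.

```java
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.m3g.Graphics3D;
import javax.microedition.m3g.World;

// Minimal sketch of a retained-mode M3G render pass on a MIDP canvas.
// The World (scene graph root) is assumed to have been loaded elsewhere,
// e.g. via Loader.load("/scene.m3g").
public class SceneCanvas extends Canvas {
    private World world; // scene graph root, set after loading

    public SceneCanvas(World world) {
        this.world = world;
    }

    protected void paint(Graphics g) {
        Graphics3D g3d = Graphics3D.getInstance();
        try {
            g3d.bindTarget(g);   // direct 3D output to this canvas
            g3d.render(world);   // render the whole scene graph
        } finally {
            g3d.releaseTarget(); // always release the 2D target
        }
    }
}
```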

2. RELATED WORK
Virtual tours through museums, shopping centers, universities or even entire cities have required the development of sophisticated applications that allow users to interactively navigate and explore their surrounding environment, for instance, to acquire detailed information about selected items of interest. In terms of positioning, we can define two types of virtual environments: indoor [1, 3] and outdoor [1, 3, 7, 10]. In this work, we have developed a handheld computing application aimed at simulating an indoor space, whose main focus is on allowing collaborative visualization without necessarily keeping track of the users' current positioning (context). However, our proposed architecture is flexible enough to be easily extended to support outdoor spaces.

Several works have proposed approaches to the navigation and transmission of 3D scenes using client-server architectures [3, 7, 10]. Some authors recognize that 3D information should be provided on demand and relative to the position/orientation of the user, although many of them do not address the problem of client-server communication, with the 3D application being executed exclusively on the handheld device [1, 8]. This work, in addition to implementing client-server communication between mobile devices (clients) and a server, offers the possibility of direct communication among the clients themselves, for example, to request a specific 3D scene to be explored by the local application, thus giving the mobile devices greater autonomy from the server. Instead of developing applications aimed at the visualization of simple 2D maps, text or HTML pages [1, 3], the proposed architecture supports realistic 3D scenes, similar to those typically found in virtual environments available for desktop computers [5, 17]. Most related work in this area supports neither the creation of sophisticated 3D environments nor their visualization and exploration [7, 10]. In other words, the level of interactivity in those systems is still too restricted (an exception is the work described in [10], where parts of a scene can be selected). The work proposed by Laakso et al. is the closest to ours, in that its users (in the role of virtual tourists) can also use a 3D map to navigate, explore and obtain tourist information from the environment (a virtual city) they are visiting [8]. However, our work differs from that of Laakso et al. in that it allows collaboration among multiple mobile clients (handheld devices) sharing the same virtual environment. In examining the few existing approaches similar to our work, we have reached the conclusion that none is fully functional yet [7]. Besides, they do not use recent 3D technologies (e.g., M3G) for the generation of 3D environments.

3. ARCHITECTURE OVERVIEW
Our proposed architecture is fundamentally based on the principle of separating data, presentation, and interaction mechanisms, following the Model-View-Controller (MVC) architectural pattern [2] (see Figure 1). The lowest layer of the architecture encapsulates all the details of the network protocols used by each device to communicate with other devices in its vicinity (via an ad hoc network protocol such as Bluetooth [15], an emerging standard for the wireless integration of small devices) or with mobile service providers in the fixed network. The MVC pattern breaks the application into three main parts: the M3G Model, the Views (or Viewports, represented by the Local Visualization and by the Remote Visualizations of each mobile device), and the Device Controller.

[Figure 1. The system architecture based on the MVC software pattern: Views (Local Visualization, Remote Visualizations), Device Controller, M3G Model, and Communication layer.]

The M3G Model manages the data and behaviour of the 3D graphical objects that are part of the application domain; it responds to queries about its state (from the Views), notifies remote observers when that information changes, and reacts to instructions to change its state (from the Device Controller). This means that the M3G Model encapsulates not only the state of the system, but also how the system works. In particular, all the graphical objects that compose a 3D scene in a given device are created and managed as part of the device's M3G Model component, which is also responsible for updating the device's Views whenever necessary. The relationship between the M3G Model and its Views follows the Observer design pattern [2], in which the M3G Model plays the role of the observable component and the Views play the role of its observers. Each View registers its events of interest with the M3G Model, which then notifies the appropriate Views every time an event of interest occurs. Any update to the current state of the M3G Model should be registered as an event of interest by the Local Visualization. Another important role of the M3G Model is to verify and respond to potential geometric collisions among the graphical objects of a scene. Depending on the collision result, the M3G Model may take different actions (e.g., not updating its Views when a requested object transformation is found to violate collision constraints).

Each View (or Viewport) typically has a one-to-one correspondence with a display surface, and knows how to render 3D scene graph objects to it. In the proposed architecture, the Views map graphics onto the device's display in two ways: through the Local Visualization of the application canvas and through the Remote Visualizations of 2D images of the cell, which are sent to remote mobile devices on demand. When the state of the M3G Model changes, the Local Visualization is automatically updated, redrawing the affected canvas to reflect those changes. The Views attach to an M3G Model and render their contents to the display surface. There can be multiple Viewports onto the same M3G Model, and each of them can render the contents of the M3G Model to a different remote display surface.

The Device Controller interprets keyboard input from the device user and maps user actions into commands that are passed to the M3G Model to effect the appropriate change. In effect, the Device Controller is the means by which the device user interacts with the application. For example, if the user presses a device button (e.g., to issue a walkthrough command like "move the object forwards") or selects a menu item (e.g., to issue a visualization command like "show the next view of a cell"), the Device Controller is responsible for translating this action into invocations of the M3G Model's operations.
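The fragment below sketches, in J2ME-style Java, how the Observer relationship between the M3G Model and its Views described above could be coded. The interface and method names (ModelView, modelChanged) are our illustrative assumptions, not part of the M3G API.

```java
import java.util.Vector;

// Hypothetical sketch of the Observer relationship between the M3G Model
// and its Views; names are illustrative only.
interface ModelView {
    void modelChanged(int event); // e.g. OBJECT_MOVED, CELL_LOADED
}

class M3GModel {
    private Vector views = new Vector(); // registered observers (CLDC has no generics)

    void register(ModelView v)   { views.addElement(v); }
    void unregister(ModelView v) { views.removeElement(v); }

    // Called after a state change that passed the collision test.
    void notifyViews(int event) {
        for (int i = 0; i < views.size(); i++) {
            ((ModelView) views.elementAt(i)).modelChanged(event);
        }
    }
}
```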

The communication layer is the means through which the M3G Model exchanges graphical information with the M3G Model components of other devices. The communication layer is also responsible for data communication with a world server, where the mobile devices search for a specific cell among the set of cells that compose the 3D virtual world. Each scene of the 3D world has a scene process that manages one 3D environment and the users connected to that scene. Each user has its own process, managing the network connection and data transfer. When a client connects to a scene, it transmits information about its capabilities over the communication connection. In addition, the server starts a bandwidth test in order to determine the capabilities of the network between server and client [12]. The goal of the server is to select a transmission sequence of object representations that provides the highest rendering quality throughout the walkthrough, subject to the limitations imposed by the available bandwidth and device capabilities [16]. The chosen cell is loaded into the M3G Model and visualized through the Views: by the Local Visualization, represented by the view produced by a local camera on the cell, and by the Remote Visualizations, represented by the views produced by the cell's list of cameras (as many as defined in the original cell extracted from the central server and loaded on the remote devices as graphics elements of the scene). The communication protocol incorporates messages that let the client inform the server whenever the user has moved. The client periodically transmits the user's current viewing parameters to the server, including viewpoint velocity and acceleration. To make geometric data available in a timely fashion, downloads are initiated some time before the data is actually needed; since the client knows in advance which objects to download, a larger area-of-interest (AOI) radius is considered for prefetching than for rendering [6]. Currently, the network technologies used in our implementation are Bluetooth [15], for data transmission among mobile devices, and Wi-Fi (wireless TCP/IP), for establishing a network connection between a mobile device and the world server. The context available to handheld users therefore depends on their physical proximity (Bluetooth range). However, new data transmission technologies are expected to emerge in the near future, providing users with a broader context. Our implementation takes this possibility into account by offering a degree of flexibility and independence from the particular network technology, hiding network-specific communication details from the M3G Model.
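As an illustration of the client side of this protocol, the sketch below periodically reports the user's viewing parameters over a Generic Connection Framework stream. The message layout, field order and connection URL are assumptions of the sketch, not a specification of our actual protocol.

```java
import java.io.DataOutputStream;
import java.io.IOException;
import javax.microedition.io.Connector;
import javax.microedition.io.StreamConnection;

// Illustrative sketch of a client reporting its viewing parameters to the
// world server; the message format is hypothetical.
class ViewpointReporter {
    private static final byte MSG_VIEWPOINT = 1;
    private DataOutputStream out;

    ViewpointReporter(String url) throws IOException {
        // e.g. "socket://server:5000" over Wi-Fi, or "btspp://..." over Bluetooth
        StreamConnection conn = (StreamConnection) Connector.open(url);
        out = conn.openDataOutputStream();
    }

    // Sent periodically so the server can schedule prefetching ahead of need.
    void sendViewpoint(float x, float y, float z,
                       float vx, float vy, float vz,
                       float ax, float ay, float az) throws IOException {
        out.writeByte(MSG_VIEWPOINT);
        out.writeFloat(x);  out.writeFloat(y);  out.writeFloat(z);  // position
        out.writeFloat(vx); out.writeFloat(vy); out.writeFloat(vz); // velocity
        out.writeFloat(ax); out.writeFloat(ay); out.writeFloat(az); // acceleration
        out.flush();
    }
}
```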

4. FUNCTIONAL ASPECTS
In the following, we discuss some specific issues that may apply to any of the n remotely connected devices. For simplicity, we focus our discussion on two particular devices (say i and j). Initially, we assume that mobile device users are only allowed to visualize and navigate through a 3D virtual world whose geometry is stored in a centralized server in the form of a graph of interconnected cells; users cannot modify any of the cells as they navigate through them. In addition, each 3D cell is defined with a fixed set of cameras, as sketched below.
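A possible in-memory representation of such a world is a graph of cells connected by portals (in the terminology of Section 5), each cell carrying its fixed camera list. The class and field names below are hypothetical, introduced only for illustration.

```java
import java.util.Vector;

// Hypothetical data structure for the server-side world: a graph of cells
// connected by portals, each cell with a fixed list of cameras.
class Cell {
    String m3gFile;                 // retained-mode scene for this cell, e.g. "/room1.m3g"
    Vector portals = new Vector();  // Portal objects leading to neighbouring cells
    Vector cameras = new Vector();  // fixed cameras defined with the cell
}

class Portal {
    Cell from, to;                  // the two rooms this door connects
}
```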

We define a master handheld device for each network created, elected according to the following rule: the handheld devices are classified into device categories and sorted by their memory and processor profiles; the sorted list is then placed on a stack of master candidates (referred to in this work as the mastersStack; a sketch is given after scenario A below). We assume that all the handheld devices can successfully communicate with each other or, at least, with the master. In particular, the master device is responsible for the local control of certain events that occur remotely (e.g., for the treatment of collisions among the static and dynamic geometric elements of a cell, the place where the dynamic objects driven by the mobile users are navigating). The user's viewpoint can be modified during translation and rotation movements through combinations of the mobile device's control keys. As soon as the mobile device establishes a connection with the world server, a specific cell is loaded into it. Alternatively, a modest dependence among mobile devices is also possible: new mobile users may optionally receive a cell (an M3G file) on demand from a neighbouring device, instead of from the central server. When working with the mobile application at hand, several execution scenarios can be identified:

A. The mobile application is started by device i (no other device is within its communication range, which makes the device its own master). In this scenario (Figure 2.a), the following steps are executed:
1. the mobile application sets up the initial configuration of the application's architectural elements (Local View, Model, Controller and Communication Layer, shown in Figure 1);
2. the Model defines the current device (i.e., i) as its own master, placing it on top of the stack of potential masters;
3. device i requests a partition of the world from the central server, which successfully replies.
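The election rule can be sketched as follows. The DeviceProfile fields and the scoring function are illustrative assumptions, since the exact classification into device categories is not spelled out here.

```java
import java.util.Vector;

// Sketch of the election rule: devices are sorted by memory/processor
// profile and stacked, the top of the stack being the current master.
class DeviceProfile {
    String id;
    long totalMemory;   // e.g. Runtime.getRuntime().totalMemory()
    int cpuRank;        // coarse processor category (assumed scale)

    // Higher score = better master candidate (hypothetical weighting).
    long score() { return totalMemory + (long) cpuRank * 1000000L; }
}

class MastersStack {
    private Vector stack = new Vector(); // top of stack = current master

    // Insertion sort keeps the best-ranked device on top
    // (CLDC has no Collections.sort).
    void add(DeviceProfile d) {
        int i = 0;
        while (i < stack.size()
               && ((DeviceProfile) stack.elementAt(i)).score() > d.score()) {
            i++;
        }
        stack.insertElementAt(d, i);
    }

    DeviceProfile master() {
        return stack.isEmpty() ? null : (DeviceProfile) stack.elementAt(0);
    }

    // Used when the current master stops responding (scenario E).
    void removeMaster() { stack.removeElementAt(0); }
}
```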

B. The mobile application is started by device j, which successfully joins an already existing community of devices connected via an ad hoc network. Once a connection to device j is established, the master of the existing community contacts the new device to check whether it is interested in receiving a remote view of the 3D scene (cell), as shown in Figure 2.b. If the new device responds positively:
1. the master device takes a list of images of the local cell using the portal cameras list;
2. the master device sends the images to device j.

At this point, the user of device j has two choices:
3. it can either discard the images and request a new cell from the central server (thus returning to case A), or
4. it can choose to navigate a cell portrayed in the list of images.

In the fourth case, the device may request the cell either from the central server, landing in a situation similar to case A, or from the master device. Afterwards, device j informs all the other devices in the ad hoc network about its memory and processor profiles, so that they can update their mastersStacks. The master is the only device to respond to device j with its own updated mastersStack. A sketch of how the master could produce these cell images is given below.
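One plausible way for the master to produce the images, using JSR-184 and MIDP 2.0 calls, is to render the world once per portal camera into an off-screen image and ship the raw pixels. The pixel-transfer format and class name are assumptions of this sketch.

```java
import javax.microedition.lcdui.Image;
import javax.microedition.m3g.Camera;
import javax.microedition.m3g.Graphics3D;
import javax.microedition.m3g.World;

// Sketch: render the local cell from one portal camera into an off-screen
// MIDP image and return its pixels for transmission to device j.
// The camera is assumed to belong to the world's scene graph.
class CellSnapshots {
    int[] snapshot(World world, Camera cam, int w, int h) {
        Image buffer = Image.createImage(w, h);  // mutable off-screen image
        Graphics3D g3d = Graphics3D.getInstance();
        Camera previous = world.getActiveCamera();
        try {
            world.setActiveCamera(cam);          // view from this portal camera
            g3d.bindTarget(buffer.getGraphics());
            g3d.render(world);
        } finally {
            g3d.releaseTarget();
            if (previous != null) {
                world.setActiveCamera(previous); // restore the local view
            }
        }
        int[] rgb = new int[w * h];
        buffer.getRGB(rgb, 0, w, 0, 0, w, h);    // raw pixels for transmission
        return rgb;
    }
}
```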

C. Device i triggers a navigation event in the cell (as the master device). At this stage, all ad hoc network devices already have replicas of each other's objects and visualize the same cell (Figure 2.c). The navigation event is triggered by the master's Device Controller and communicated to its local Model, which, in turn, checks whether the event can actually take place. For instance, suppose that the navigation event represents an object translation, under the control of device i, from coordinates (x, y, z) towards a given direction within the cell being simultaneously visualized by the other devices' users. In this situation, the object under the control of device i may collide with other objects within the same cell, or with any other graphics elements of the 3D scene. If a potential collision is detected, the Model component will not authorize the translation. Otherwise, the Model will:
1. apply the corresponding geometric transformation,
2. update the View, and
3. send the geometric transformation to the other mobile devices.

This check-then-apply sequence is sketched below.
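In the sketch, note that M3G itself provides no collision detection, so the collision test is an application-supplied routine (stubbed here); the helper names are illustrative.

```java
import javax.microedition.m3g.Mesh;

// Sketch of the Model authorizing a navigation event (scenario C).
class NavigationHandler {
    void onTranslate(Mesh avatar, float dx, float dy, float dz) {
        if (collides(avatar, dx, dy, dz)) {
            return;                   // veto: transformation not authorized
        }
        avatar.translate(dx, dy, dz); // 1. apply the geometric transformation
        notifyLocalView();            // 2. redraw the affected canvas
        broadcastTranslation(avatar, dx, dy, dz); // 3. replicate on other devices
    }

    // Application-supplied bounding-volume test (stub); M3G offers none.
    private boolean collides(Mesh m, float dx, float dy, float dz) { return false; }
    private void notifyLocalView() { /* request canvas repaint */ }
    private void broadcastTranslation(Mesh m, float dx, float dy, float dz) {
        /* send the transformation over the ad hoc network */
    }
}
```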

[Figure 2. Execution scenarios of the mobile application (panels a-e correspond to scenarios A-E). Legend: master device; world request; data sending; triggers event.]

D. Device j triggers a navigation event in the cell (without being the master device). In this case, device j only needs to send the event to the master device, which handles it in a way similar to case C (Figure 2.d). The only difference is that the event was triggered by a remote device instead of by the master.

E. The master device is no longer available in the ad hoc network. When a device detects that the current master is no longer responding, it removes the master from the top of its mastersStack. The new master device is the one that now sits on top of the mastersStack (Figure 2.e).

In a mobile environment, dealing with fault tolerance is a major concern. Our architecture was designed to tolerate certain kinds of communication failures by taking advantage of the devices' local processing and communication capabilities whenever possible. In particular, by allowing a new 3D scene to be loaded from a neighbouring device, the architecture can tolerate a failure to communicate with the world server. In addition, by keeping all rendering and view update activities local, the architecture minimizes the number of messages that have to be exchanged with the master device, and is thus less dependent on the underlying wireless communication infrastructure. The communication details between the mobile application and the central server, as well as the details of the search mechanism used for selecting an appropriate 3D scene, are beyond the scope of this paper and were deliberately left out of the current discussion; they will be addressed in future work.

5. A VIRTUAL MUSEUM APPLICATION
To illustrate the main functionalities of our architectural components, in this section we describe a 3D virtual museum application developed on top of the proposed architecture. The application was tested on a J2ME emulator. Like other recent research in the field, our motivation is that mobile users can move through time and space and, in this way, interactively experience the architecture, sculptures and paintings of a particular period while taking a virtual tour of museums [3]. Visitors to our virtual museum can use their handheld devices to create a route through the museum, alone or with the help of a master device acting as a guide; teachers can assist their students during a museum excursion, and mobile users can interact with each other to experience or solve a group-based task (e.g., finding a specific sculpture and informing the others). Users may access additional information through the 3D scenes they are interactively exploring, and may request information on objects of interest by directly pointing to their representation in the 3D virtual world. Additionally, they can change position and zoom into a particular viewpoint, for example, to inspect some famous paintings more closely and take screenshots of them. Alternatively, they can request pictures of different rooms of the virtual museum, for instance to look for a piece of art or a person. Further, the users can share these pictures collaboratively within a group.

[Figure 3. Shared 3D space where multiple mobile clients can navigate through and interact with each other (labels: world server, master device).]

In our application, cells correspond to the rooms of the virtual museum building. Portals likewise correspond to the doors through which neighbouring museum rooms connect to each other. Several cameras are attached to each room (cell), and multiple views can be dispatched to any mobile client. This is illustrated in Figure 3, where two ad hoc networks are formed in parallel. The master device on the right sends its current cell's view to the master on the left (see the image shown in the subwindow at the top right of the mobile phone on the left side of the figure). On the right, three mobile device users are visualizing and navigating through the same cell, originally stored in the central server. On the left, one mobile device is visualizing and navigating through another cell; in this case, no other device is within its communication range, which makes the device its own master. A minimal sketch of loading one such room from its M3G file is given below.
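Loading one room reduces to a single JSR-184 Loader call; the resource name and class below are hypothetical, and the M3G file is assumed to have been exported with a World root node.

```java
import java.io.IOException;
import javax.microedition.m3g.Loader;
import javax.microedition.m3g.Object3D;
import javax.microedition.m3g.World;

// Sketch of loading one museum room (a cell) from its .m3g file.
// For a file exported with a World root, the first root object is the World.
class CellLoader {
    World loadRoom() throws IOException {
        Object3D[] roots = Loader.load("/rooms/room1.m3g"); // hypothetical resource
        return (World) roots[0];
    }
}
```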

6. CONCLUSIONS AND FUTURE WORK
This work presented a generic architecture that relies on the M3G API for on-the-go visualization of, and navigation through, 3D virtual worlds in a mobile setting. As a validation effort, a virtual tour application was developed based on the proposed architecture, and the results show that multiple mobile clients using M3G can successfully walk through a shared 3D space and interact with each other. M3G proved capable of rendering realistic visualizations: 3D scenes with high-quality graphics at a reasonable level of detail, including several mesh objects composed of more than a thousand vertices with high-resolution textures. Our prototype can be used for a number of tasks: to guide users while they physically walk through the corresponding real environments, and to provide them with detailed information about the places, items and services available at each visited location.

As future work, we anticipate giving handheld users the possibility of changing the geometry and the attributes of a cell (either individually or collaboratively). The proposed architecture has also not yet been used systematically, nor completely implemented, to explore all aspects of visualizing and exploring virtual worlds in a highly dynamic mobile setting. For example, we need to incorporate mechanisms for automatically partitioning virtual worlds into 3D cells [4]. We also aim to apply a detailed usability analysis to investigate the level of interactivity that can be achieved with complex 3D graphics elements in mobile Java applications using M3G. Finally, the proposed architecture is flexible and designed in a way that permits further improvements to the implementation by extending the 3D graphics methods and classes currently available in the M3G API. In this context, an interesting extension of this work would be to evolve the current implementation into a framework.

7. ACKNOWLEDGMENTS
The authors are grateful to the Brazilian supporting agency. In particular, Rafael G. Barbosa benefits from a CAPES studentship, under grant No. 22002014.

8. REFERENCES
[1] Abowd, D. A., Atkeson, C. G., Hong, J., Long, S., Kooper, R., and Pinkerton, M. Cyberguide: a mobile context-aware tour guide. Wireless Networks, 3, 5 (Oct. 1997), 421-433.
[2] Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., Stal, M. Pattern-Oriented Software Architecture, Volume 1: A System of Patterns. Wiley, Chichester, England, 1996.
[3] Chou, L.-D., Wu, C.-H., Ho, S.-P., Lee, C.-C. Position-aware multimedia mobile learning systems in museums. In Proceedings of the International Conference on Web-Based Education (Innsbruck, Austria, Feb. 16-18, 2004).
[4] Cohen-Or, D., Chrysanthou, Y., Silva, C. T., Durand, F. A survey of visibility for walkthrough applications. IEEE Transactions on Visualization and Computer Graphics, 9, 3 (Jul. 2003), 412-431.
[5] Di Blas, N., Hazan, S., Paolini, P. The SEE experience: edutainment in 3D virtual worlds. In Museums and the Web 2003 (Charlotte, NC, Mar. 19-22, 2003). Archives & Museum Informatics, 2003.
[6] Hesina, G., Schmalstieg, D. A network architecture for remote rendering. In Proceedings of the 2nd International Workshop on Distributed Interactive Simulation and Real-Time Applications (Montreal, Canada, Jul. 19-20, 1998). IEEE CS Press, 1998, 88-91.
[7] Krum, D. M., Ribarsky, W., Hodges, L. Collaboration infrastructure for a mobile situational visualization system. Available at www.cc.gatech.edu/~dkrum/papers/krum.collab.pdf. Visited on 06/07/05.
[8] Laakso, K., Gjesdal, O., Sulebak, J. R. Tourist information and navigation support by using 3D maps displayed on mobile devices. In Proceedings of the HCI in Mobile Guides Workshop at Mobile HCI (Udine, Italy, Sep. 8-11, 2003).
[9] Lee, S., Ko, S., Fox, G. Adapting content for mobile devices in heterogeneous collaboration environments. In Proceedings of the International Conference on Wireless Networks (Las Vegas, USA, Jun. 23-26, 2003). CSREA Press, 2003, 211-217.
[10] Raposo, A. B., Neumann, L., Magalhaes, L. P., Ricarte, I. L. M. Visualization in a mobile WWW environment. In WebNet'97 - World Conference of the WWW, Internet, and Intranet (Toronto, Canada, 1997).
[11] Smith, I. Social mobile applications. Computer, IEEE CS Press, 38, 4 (Apr. 2005), 84-87.
[12] Soetebier, I., Birthelmer, H., Sahm, J. Client-server infrastructure for interactive 3D multi-user environments. In Proceedings of the WSCG Posters (Plzen, Czech Republic, Feb. 2-6, 2004). UNION Agency - Science Press, 2004, 165-168.
[13] Sun Microsystems. Java 2, Micro Edition (J2ME) Wireless Toolkit 2.2. Available at http://java.sun.com/products/j2mewtoolkit/download2_2.html. Visited on 12/05/2005.
[14] Sun Microsystems. JSR-184: Mobile 3D Graphics API for J2ME. December 2003. Available at http://jcp.org/aboutJava/communityprocess/final/jsr184/index.html. Visited on 12/05/05.
[15] Sun Microsystems. JSR 82: Java APIs for Bluetooth. Available at http://www.jcp.org/en/jsr/detail?id=82. Visited on 15/05/05.
[16] Teler, E., Lischinski, D. Streaming of complex 3D scenes for remote walkthroughs. In Proceedings of EUROGRAPHICS (UK, Sep. 4-7, 2001). Blackwell Publishers, 20, 3 (2001).
[17] Wojciechowski, R., Walczak, K., White, M., Cellary, W. Building virtual and augmented reality museum exhibitions. In Proceedings of the 9th International Conference on 3D Web Technology (Monterey, California, Apr. 2004). ACM Press, 2004, 135-144.