Interactive Visualization of Ocean Circulation Models

Scott Nations, Robert Moorhead, Kelly Gaither, Steve Aukstakalnis, Rhonda Vickery, Warren Carl Couvillion Jr.
Mississippi State University

Abstract

Visualization of computational oceanography results is traditionally a post-processing step. This batch orientation is clumsy if one wants to observe the effect of a wide range of parameters on the solution. This paper describes the conversion of an ocean circulation model from this traditional design to an interactive program in which the computed solution is viewed in real time over a wide-area network and the user can change the model parameters and immediately observe their impact on the solution.

Daniel N. Fox, Peter Flynn, Alan Wallcraft, Patrick Hogan, Ole Martin Smedstad
Naval Research Laboratory

Affiliations: NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, Mississippi State, MS 39762; Naval Research Laboratory, John C. Stennis Space Center, MS 39529-5004

1 Introduction

The Naval Research Laboratory (NRL) at Stennis Space Center, Mississippi, has one of the leading efforts in computational oceanography [3]. While working with a model of the Sea of Japan (SOJ), it became clear that the location of a major flow feature, the polar front, was significantly affected by the flow rate and angle at which the ocean current enters the Sea through the Tsushima Strait. To explore this relationship properly, the SOJ model needed to be run with a large number of different flow angles and rates. However, the existing software was batch-oriented: the ocean model and visualization program were separate jobs which communicated through data files, and the images created were written to disk for later viewing by the scientist. This process was unsatisfactory for the task at hand. An interactive model was needed, one which displays the solution as it is generated and which gives the scientist the ability to modify the relevant parameters and immediately observe the effects of the changes on the solution.

Researchers in the Scientific Visualization Thrust of the NSF Engineering Research Center at Mississippi State University already had experience writing software for visualizing the output from NRL's ocean models [4]. However, all of the previous work had been on tools for post-processing of the data. The new computing model required many of the same visualization techniques, but in a system in which the flow of control of the software and the trade-offs between speed and generality had largely been reversed.

2 Design

There were a number of other issues in addition to the real-time, interactive constraints. The ocean model runs on a Cray C90 and the visualization would be done on a Silicon Graphics workstation, so the final system had to be a distributed application running on a network of heterogeneous computers. The program needed to support a wide variety of display devices, from a standard workstation monitor to the latest in virtual reality technology. Modifications to the SOJ ocean model code had to be held to a minimum so that the changes could later be duplicated in other NRL models. Finally, resources for the project were limited, so the final plan had to minimize programming effort by using existing software and libraries as much as possible.

The final design of our system, cthru, consists of three parts: the ocean model, the extraction task, and the display program. The ocean model produces circulation data and passes it to the extraction task. Experience has shown that the NRL scientists are interested in viewing not only the raw data but a long list of derived quantities as well. The extraction task derives these values and generates any graphics extracts that can be offloaded from the display routines. This new data is then sent to the display program to be used in building the final image. Periodically, the ocean model queries the display program for any parameters which may have been changed by the user.
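The three-part design can be thought of as a producer/consumer pipeline with a feedback channel for parameter changes. The following minimal sketch uses Python threads and queues purely as stand-ins for the distributed MPI tasks; all names, data fields, and the feedback step are illustrative assumptions, not cthru code.

```python
import queue
import threading

def ocean_model(data_out, params_in, steps):
    """Produce raw circulation data; poll for user-changed parameters."""
    inflow_angle = 0.0
    for t in range(steps):
        try:
            inflow_angle = params_in.get_nowait()  # feedback from the display
        except queue.Empty:
            pass  # no change requested this timestep
        data_out.put({"step": t, "angle": inflow_angle, "raw": [t, t + 1]})
    data_out.put(None)  # end-of-run marker

def extraction_task(data_in, extracts_out):
    """Derive quantities the display needs, offloading work from rendering."""
    while (record := data_in.get()) is not None:
        record["speed"] = sum(record["raw"]) / len(record["raw"])  # derived value
        extracts_out.put(record)
    extracts_out.put(None)

def display_program(extracts_in, params_out, frames):
    """Consume extracts for rendering; send parameter changes back upstream."""
    while (record := extracts_in.get()) is not None:
        frames.append(record)
        if record["step"] == 1:
            params_out.put(35.0)  # user adjusts the inflow angle mid-run

raw_q, ext_q, param_q, frames = queue.Queue(), queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=ocean_model, args=(raw_q, param_q, 5)),
    threading.Thread(target=extraction_task, args=(raw_q, ext_q)),
    threading.Thread(target=display_program, args=(ext_q, param_q, frames)),
]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

In the real system each stage runs on a different machine and the queues are MPI channels, but the separation of concerns is the same: the model never renders, and the display never derives.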

3 Implementation

The Message Passing Interface (MPI) is used for all communication between the three tasks [1]. MPI is an open standard with many well-supported, freely available public domain implementations. The application programmer is provided with a high-level programming interface which hides the details of communication between different computers. In the final program, execution of the various parts of the system can be moved from one computer to another through simple changes to a parameter file. This has proved to be a useful capability, as the software has been used at a number of different sites which each have their own unique set of available machines.

The existing ocean model was modified to use MPI to send its output directly to the extraction task rather than to disk files. The ocean model code is already structured to support a variety of different output types, so adding MPI output as a new type had virtually no impact on the code. For development purposes, a "fake" ocean model was written which reads precomputed data from disk files and sends it to the extraction task. This proved to be very beneficial during development of the rest of the system. Switching between the real ocean model and the fake model is again accomplished through minor changes to a parameter file.

The extraction task accepts raw data from the ocean model and computes any other values which may be needed by the display program. Currently, the raw data as well as the extracts are passed along to the display program. In the future, only the data needed for the visualizations currently being performed will be sent, reducing the bandwidth required along this portion of the communications channel.

The display program is written using the Cave Automatic Virtual Environment (CAVE) library developed by the Electronic Visualization Laboratory at the University of Illinois at Chicago [2]. The same copy of cthru has been used on a standard 2D workstation monitor, on a workstation monitor outfitted with an electro-optical 3D display capability (CrystalEyes), on an ImmersaDesk, on a Fakespace BOOM, and in the CAVE virtual reality theater, switching between them with a minor modification to a configuration file. Drivers for these devices are incorporated into the CAVE library, freeing the application programmer to concentrate on visualization code.
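The parameter-file switch between the real and fake ocean models described above might look like the following sketch. The paper does not show cthru's actual parameter file format, so everything here (the JSON-lines record format, the function names, the config keys) is an illustrative assumption.

```python
import json

def fake_ocean_model(record_stream):
    """Replay precomputed timesteps as if the real model had produced them."""
    for line in record_stream:
        yield json.loads(line)

def real_ocean_model():
    """Placeholder for the Cray-resident model sending output over MPI."""
    raise NotImplementedError("requires the C90 and an MPI connection")

def make_model(params, record_stream=None):
    """Choose the data source from the parameter file contents."""
    if params.get("model") == "fake":
        return fake_ocean_model(record_stream)
    return real_ocean_model()

# Example: replaying two precomputed timesteps in "fake" mode
import io
stream = io.StringIO('{"step": 0}\n{"step": 1}\n')
records = list(make_model({"model": "fake"}, stream))
```

Because both models feed the extraction task through the same interface, the rest of the system cannot tell which one is running, which is what makes the one-line switch possible.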
The CAVE library places two main restrictions on programs which use it: first, all visualization must occur within a single callback routine; and second, all data used by this routine should be kept in a shared memory area. The more advanced display devices supported by the library can be thought of as several display devices working together to create one unified image. During the initial CAVE library call, a separate process is spawned for each of these devices, and these processes use the display callback to create a portion of the final image. By keeping the data in a shared memory area, the software mirrors the design of its target machine, the Silicon Graphics Onyx, a multiprocessing computer with up to 24 CPUs sharing the system bus and main memory.

In cthru, static and dynamic data are placed in separate structures; the shared memory area contains one copy of the former structure but two copies of the latter. The first copy contains the data which is currently being displayed, while new data from the extraction task is loaded into the second. Once a complete record has arrived, the display routine is halted just long enough to swap the two buffers. This dual-buffer strategy is necessary to avoid a noticeable delay as each new record arrives; such a delay would have a detrimental effect on the real-time feel of the application. Similarly, if the image is updated too slowly it has a negative impact on the overall feel of the application. To avoid this, all processing which is not directly related to creating the image is pushed out of the display callback, and a separate task is spawned at program startup to handle communication with the other parts of the system.

The SOJ model outputs a time-varying three-dimensional grid in which the x and y coordinates remain fixed but the z values change. At every timestep, a sea surface height field is computed, as well as a two-dimensional velocity vector for each grid point. In cthru, the sea surface is shown as an undulating, lit surface which can be toggled on and off. Pathlines are used to visualize the velocity field. These are drawn as strings of beads which move about within the Sea. They are color coded by layer, and the distance between the beads on a string indicates the speed of the flow in that area. These pathlines are periodically restarted by the application.

The cthru application was also ported to a Fakespace PUSH BOOM, a full-color, wide field-of-view stereoscopic display. The PUSH BOOM implementation of cthru incorporated a transparent 3D texture positioned directly above the sea surface to more intuitively reveal variations in the sea surface height. A wireframe grid was applied to the sea floor to emphasize variations in the terrain.

A portion of the GeoSphere image containing the SOJ and some of the surrounding area is used to provide a context for the data. The bottom topography of the area covered by the model is inset within this image, and the pathlines float about inside this space. The user can fly (or swim) around in this environment. A top-down "You are here" map is available, with an arrow icon indicating the current location and view direction. Sound effects (chopper noise, sonar pings, etc.) provide additional reinforcement of the current location within the overall space.

Several pop-up menus allow the user to modify the model parameters and the state of the display. When parameters are changed, the model moves slowly from the old value to the new to avoid numerical instabilities in the solution process. Both the current and the desired value appear on the pop-up until they have converged. Controls are also provided to toggle the various maps and menus, reset to the initial location, and exit the program.
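The dual-buffer scheme described above can be reduced to a small sketch. This uses Python threading in place of the CAVE shared memory area, and all names are illustrative; the point is only that the display reads one buffer freely while the communication task fills the other, and the display halts just for a pointer swap.

```python
import threading

class DoubleBuffer:
    """Display reads `front` without blocking; the communication task fills
    `back`, and a brief locked swap exchanges the two once a complete
    record has arrived."""

    def __init__(self):
        self.front = {}  # record currently being displayed
        self.back = {}   # record being filled from the extraction task
        self._lock = threading.Lock()

    def load(self, record):
        # Runs in the communication task; the display callback never waits
        # on this transfer.
        self.back = dict(record)

    def swap(self):
        # The display is halted only for the duration of this exchange.
        with self._lock:
            self.front, self.back = self.back, self.front

buf = DoubleBuffer()
buf.load({"timestep": 7, "ssh": [0.1, 0.2]})  # new record arrives
buf.swap()                                    # display now shows timestep 7
```

A single shared buffer would instead force the display to pause for the whole transfer of each record, producing exactly the stutter the dual-buffer design avoids.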

4 Conclusions and Future Work

The traditional batch-oriented approach to visualizing computational oceanography results was largely dictated by computing and resource limits which no longer exist. With the right software and hardware, these results can now be viewed in real time, giving the scientist a better understanding of the model. (Earth image: Satellite Composite View of Earth, © 1990 Tom Van Sant/The GeoSphere Project, Santa Monica, CA 90402.) Visualization techniques remain largely unchanged, although there are additional complications because the solver and the visualization task now run in parallel. Software libraries are available to handle a significant part of this new burden and let the programmer concentrate on visualization routines.

The cthru software was exhibited as part of the Supercomputing '95 GII Testbed in San Diego, CA, using the CAVE, a Fakespace BOOM, and a workstation monitor for display. The software was run under various configurations, with the three modules being executed on computers both on and off the convention site. Much of this adaptability is a direct result of using the MPI and CAVE libraries. MPI provides a very rich set of routines for communicating between separate tasks. The current version of the library does not have support for task control, a deficiency which is being addressed in the latest round of the standards process. The CAVE library provides a single programming interface for a large number of advanced display devices. It unfortunately does not provide a good way of accepting input from the user, which forces everyone to write their own routines to do this. However, if one wants to program for the advanced display devices, this is an acceptable limitation. In retrospect, the benefits derived from both libraries far outweighed any design restrictions or learning costs associated with using them.

Further work needs to be done to transition the cthru software from a prototype which works only with the SOJ model to a more general package which can be used by all of the models developed by NRL. The usefulness of the package as an analytical tool would be greatly enhanced through the addition of other visualization techniques. Work is underway on adding isosurfaces and transparent surfaces and on enhancements to the existing pathlines.
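The bead-string pathlines described in Section 3 amount to advecting a particle through the velocity field and dropping a bead at fixed time intervals, so that bead spacing directly encodes local flow speed. The sketch below uses forward Euler integration and an invented shear-flow field; the paper does not specify cthru's integrator, so this is an assumption, not the actual implementation.

```python
def advect_beads(velocity, start, dt, n_beads):
    """Integrate a pathline with forward Euler, recording one bead per step.
    `velocity(x, y)` returns the local (u, v) flow vector."""
    x, y = start
    beads = [(x, y)]
    for _ in range(n_beads - 1):
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
        beads.append((x, y))
    return beads

# Invented shear flow: speed grows with y, so strings seeded higher up
# show wider bead spacing, i.e. faster flow.
beads = advect_beads(lambda x, y: (1.0 + y, 0.0),
                     start=(0.0, 2.0), dt=0.1, n_beads=4)
```

With this seed the local speed is 3.0, so successive beads land 0.3 apart; in a spatially varying field the spacing would stretch and compress along the string, which is exactly the visual cue the technique relies on.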
In the future, real-time visualization software should replace batch-oriented software as the standard way to view computational oceanography results.

5 Acknowledgments

Thanks to Fakespace, Inc. for hardware and technical assistance with this project.

References

[1] The Message Passing Interface (MPI) standard, http://www.erc.msstate.edu/mpi/.

[2] Electronic Visualization Laboratory, http://www.ncsa.uiuc.edu/evl/docs/html/homepage.html.

[3] H. E. Hurlburt, Alan J. Wallcraft, Ziv Sirkes, and E. Joseph Metzger. Modeling of the global and Pacific Oceans: On the path to eddy-resolving ocean prediction. Oceanography, 5(1):9–18, 1992.

[4] Andreas Johannsen and Robert J. Moorhead II. AGP: Ocean model flow visualization. IEEE Computer Graphics and Applications, 15(4):28–33, July 1995.

Figure 1: Overview of the Sea of Japan. The dark blue sea surface undulates as new data arrives from the model.

Figure 3: View from the sea floor. The red arrow on the "You are here" map shows the current location and view direction.

Figure 2: Pathlines as bead strings for visualization of the ocean currents. Color indicates model layer.

Figure 4: Model controls panel. Red values slowly converge on black as the model runs.