Volume Visualization in VRML

Johannes Behr, Computer Graphics Center (ZGDV), [email protected]
Marc Alexa, Technische Universität Darmstadt, [email protected]

Abstract

Volume visualization has become an invaluable tool. A wide variety of data sets coming from medical applications (e.g. MRI, CT or 3D ultrasound) or geological sensing are represented as structured volume grids. In many applications it is desirable to access such data sets over the net and explore the volume on a typical PC. Modern graphics hardware makes volume rendering at interactive rates possible. However, protocols for the exchange of 3D graphics content such as VRML97 are not equipped to process volume data. This paper presents an approach using 2D/3D textures and standard rendering hardware, which allows real-time rendering of volume and polygonal data in VRML applications. The proposed environment enables the user to navigate through - and interact with - the VRML scene, combining volume and surface model data sets.

Keywords: Volume Visualization, 3D Textures, Hybrid Rendering, VRML200x

1 Introduction

Volume rendering has become an integral part of today's visualization techniques. In many scientific fields and application areas, real-world data is measured or generated as a volume, and visualizing this data is the most important way of exploring it. Volume visualization techniques have advanced to a state that allows real-time display of volumes and user interaction even on moderate-cost PCs. More generally, three-dimensional graphics are everywhere these days. Among the several ways of accessing 3D graphics content, the web is particularly attractive, as it is easily accessible. On web sites, 3D graphics are delivered either through proprietary solutions or using international standards such as VRML97. It is desirable that these standards support volume data as a graphical primitive which can be used just like other primitives. Typically, standards such as VRML offer the specification of three-dimensional shapes only as boundary representations. Meshes are the most important and widespread representation, as they allow arbitrarily complex objects to be specified in a compact way. However, meshes are no replacement for volumes. While it is quite common to convert isosurfaces of a volume to a mesh, the volume contains much more information. In many applications (e.g. in the medical field), isosurfaces are not of interest anyway.

Figure 1: Dynamic slicing for Volume Rendering

In this work, we present a concept and implementation for integrating volume rendering techniques into VRML or X3D. Specifically, 2D or 3D texturing is exploited to display volume data at interactive rates. Only one additional node is proposed, and the texture nodes are extended by one additional field. Both changes integrate seamlessly into the frameworks of VRML and X3D. The implementation is largely independent of the specification and will be discussed in detail. The result is a system that allows the user to render and browse worlds comprising volumes and other objects such as meshes.

1.1 Related Work

There are innumerable approaches to the visualization of volume data. For an overview we direct the interested reader to [3] or [10]. Our approach is based on hardware texturing (see section 2). Implementations of this approach can be found as commercial packages, namely SGI's Volumizer [5] and an extension for Inventor by TGS [13]. To our knowledge, few approaches deal with volume rendering in a web environment, and only the problem of insufficient rendering speed on the client side has been addressed. One idea is to extract an isosurface from the volume data on a dedicated server and transmit the resulting surface as an indexed face set to a VRML browser on the client side [9, 11]. Another approach along the same lines is to render the volume on the server into an image and send this image to a client image viewer [1]. Our proposal is different, however, since it supports direct volume rendering on the client side, assuming that modern graphics hardware is available. This leads to higher frame rates and interactivity, but has the disadvantage that the typically massive amount of volume data has to be transmitted over the connecting infrastructure.

2 Volume Rendering using Textures

Many techniques for volume rendering have been developed, each with its specific benefits and shortcomings. Since the development of advanced graphics hardware [2] it quickly became clear that volume rendering could be done in hardware using texture mapping [4][7]. Nowadays, 2D and 3D texture support is part of the OpenGL 1.2 standard [12], and fast implementations are available on a growing number of consumer cards (e.g. ATI Radeon, NVIDIA NV20, 3Dlabs GVX1) and high-end graphics cards (e.g. Intense3D Wildcat, HP fx10). Using the texture mapping capabilities of graphics hardware results in volume rendering at almost interactive frame rates, which is hard to achieve in software. Of course, texturing techniques are somewhat limited with respect to image quality. In most real-world applications, however, exploring a volume data set interactively is crucial. In the following sections we briefly explain the concepts of volume rendering using texturing hardware.

Figure 2: Using various blending functions on one data set. The left image shows the "OVER" blending function on RGBA values, the middle image uses the same blending function on gray values, and the right image is generated by a maximum intensity projection (MIP) blending function.

2.1 Slicing

The basic idea is to slice the volume into a set of parallel surfaces. Technically, this is no different from re-sampling the volume according to the viewer's position. Ideally, the slice surfaces are normal to the view rays, which implies that the surfaces should be concentric spherical shells. For large viewing distances the spherical shells can be sufficiently approximated by parallel planes perpendicular to the viewing direction. The spacing of the surfaces should ideally be equidistant. In practice, however, voxels far away from the viewer are mostly invisible, and it might be tolerable to use larger distances in these regions. For each slice a texture is computed, containing texel alpha values derived from the information contained in the volume. The slice is represented as a mesh; if slices are planes, they can be represented as polygons having three to six sides. These textured meshes are then rendered from back to front so that features close to the viewer are not obstructed by those farther away.
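As a minimal sketch of the slice-position computation (hypothetical code, not taken from the paper's implementation; it assumes a unit-cube volume and a normalized view direction), back-to-front order simply means processing the planes farthest from the viewer first:

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Signed distances along 'view' of n equidistant slice planes spanning
// a unit-cube volume, in back-to-front order. Each plane view.p = d is
// later clipped against the cube into a 3- to 6-sided textured polygon.
std::vector<float> slicePositions(const Vec3& view, int n)
{
    float dmin = 1e9f, dmax = -1e9f;
    for (int c = 0; c < 8; ++c) {             // project the 8 cube corners
        float d = (c & 1) * view.x + ((c >> 1) & 1) * view.y
                + ((c >> 2) & 1) * view.z;
        if (d < dmin) dmin = d;
        if (d > dmax) dmax = d;
    }
    std::vector<float> pos;
    for (int i = n - 1; i >= 0; --i)          // farthest plane first
        pos.push_back(dmin + (dmax - dmin) * (i + 0.5f) / n);
    return pos;
}

int main()
{
    Vec3 view = { 0.577f, 0.577f, 0.577f };   // normalized view direction
    for (float d : slicePositions(view, 4))
        std::printf("%g\n", d);
}
```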

2.2 2D and 3D Textures

While 2D textures are supported in hardware by almost every modern graphics system, 3D textures have until now been available only in higher-end systems. Since 3D textures are part of the DirectX 8 specification, today's consumer hardware is starting to support them. OpenGL 1.2 specifies 3D textures; however, implementations of the specification do not yet exist for all systems. In order to compute textured slices as explained above, a plane (or tessellated spherical shell) has to be cut against the volume. If 3D texturing is available, the mesh vertices inside the volume directly get their texture coordinates from their positions. Additional coordinates have to be computed where edges of the mesh or polygon leave the volume. For the typical case of using parallel planes, only these three to six texture coordinates have to be computed, completely specifying the textured slice. Note that volume data sets are typically large and might not fit into the texture memory of the graphics subsystem. In this case the volume is simply cut into 'bricks', where each brick must not exceed 50 percent of the available texture memory in order to upload a new brick while drawing the current one. If only 2D texturing is supported, cutting the slice against the volume cannot be done in hardware. In order to still achieve interactive frame rates, slices are then restricted to planes perpendicular to the coordinate system axes. These slices are just subsets of the original volume data set and can be quickly restructured into 2D textures.
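The texture coordinates mentioned above are just an affine rescaling of vertex positions into the volume's (s,t,r) domain; a small sketch (hypothetical helper, assuming the volume occupies an axis-aligned box in local coordinates):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Map a slice-polygon vertex from the volume's local bounding box
// [min, max] to (s,t,r) texture coordinates in [0,1]^3. With 3D
// texturing, every clipped vertex gets its coordinate this way.
Vec3 texCoord(const Vec3& p, const Vec3& min, const Vec3& max)
{
    return { (p.x - min.x) / (max.x - min.x),
             (p.y - min.y) / (max.y - min.y),
             (p.z - min.z) / (max.z - min.z) };
}

int main()
{
    Vec3 lo = { -1, -1, -1 }, hi = { 1, 1, 1 };
    Vec3 t = texCoord({ 0.0f, 0.5f, -1.0f }, lo, hi);
    std::printf("s=%g t=%g r=%g\n", t.x, t.y, t.z);  // s=0.5 t=0.75 r=0
}
```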

2.3 Transfer Function

The values in a volume data set represent different properties of matter. Depending on the application, one wants to assign different transparency values to a voxel according to its original value in the data set. In this way, particular regions can be amplified or suppressed. The transfer function is typically described using lookup tables, which assign each data value a transparency or color value. In special cases it can be useful to also use gradient information from the volume to specify the alpha value of a texture.
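As a sketch of how such lookup tables might be applied (hypothetical code; an 8-bit density volume is expanded into RGBA texels before texture upload):

```cpp
#include <cstdint>
#include <vector>

// Expand an 8-bit density grid into RGBA texels using 256-entry
// lookup tables: each data value is assigned a color and an alpha
// (transparency) value, amplifying or suppressing particular regions.
std::vector<uint8_t> applyTransfer(const std::vector<uint8_t>& density,
                                   const uint8_t rgb[256][3],
                                   const uint8_t alpha[256])
{
    std::vector<uint8_t> texels;
    texels.reserve(density.size() * 4);
    for (uint8_t v : density) {
        texels.push_back(rgb[v][0]);
        texels.push_back(rgb[v][1]);
        texels.push_back(rgb[v][2]);
        texels.push_back(alpha[v]);
    }
    return texels;   // ready for upload as an RGBA texture
}
```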

2.4 Blending

While rendering the slices from back to front, transparent texels accumulate into a pixel color. By using different blending functions, several effects can be achieved (see Figure 2). Specifically, the blending function defines how to add a new alpha-textured slice to the set of already accumulated ones. If each new slice is linearly blended with the accumulated ones, texels with high alpha values are most visible. If light is attenuated according to alpha values, the resulting image is comparable to an X-ray image. By overriding old values only with higher new values, only the maximum intensity points remain visible, which is useful to show areas with certain material properties. By windowing such functions in the data value domain, parts corresponding to certain data values become visible.
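In OpenGL terms, these blending modes map naturally onto standard blend settings. A hedged sketch follows (the paper does not list its exact GL calls; note that the maximum-intensity mode requires the EXT_blend_minmax extension or the OpenGL 1.2 imaging subset):

```cpp
#include <GL/gl.h>

enum Blending { OVER, MIP, ATTENUATE };

// Configure blending before drawing the slice stack back to front.
void setBlending(Blending mode)
{
    glEnable(GL_BLEND);
    switch (mode) {
    case OVER:        // linear blending: high-alpha texels dominate
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        break;
    case MIP:         // keep the maximum intensity seen at each pixel
        glBlendFunc(GL_ONE, GL_ONE);
        glBlendEquation(GL_MAX);  // EXT_blend_minmax / imaging subset
        break;
    case ATTENUATE:   // multiplicative attenuation, X-ray-like image
        glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
        break;
    }
}
```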

3 Volume Rendering in VRML

In this section we present the extensions to VRML necessary to facilitate volume rendering. Further, the process of rendering boundary representations and volumes at the same time is explained in detail.

3.1 Proposed Extensions and Changes

Volumes are typically represented as structured three-dimensional grids. The only type of structured regular grid supported by VRML97 is the image texture. It seems natural to extend the definition of an 'image' to arbitrary dimensions in order to support volumes in VRML. Of course, this alone does not provide volume rendering support in VRML; a new node has to be introduced, which provides the means to slice volumes into parallel textured polygons for rapid display of the volume.

3.2 Texture Node Extension

Texture maps are defined in VRML/X3D over a 2D coordinate system (s, t) ∈ [0, 1]². There are three node types available to specify the texture data: the ImageTexture node loads pixel data from a given image file URL; the MovieTexture node defines an animated movie texture map and also loads its data from an external URL; the PixelTexture node defines a 2D image-based texture map as an explicit array of pixel values.

Figure 3: Animated volume of ultrasonic data

Only the PixelTexture node explicitly defines the texture dimension to be two. We extend the specification of the ImageTexture and MovieTexture nodes to hold 1D, 2D or 3D data by simply adding a repeatR field:

PROTO ImageTexture [
  ...
  field SFBool repeatR TRUE
  ...
] {}

PROTO MovieTexture [
  ...
  field SFBool repeatR TRUE
  ...
] {}

The actual dimensions are declared inside the file referenced in the url field. Therefore, the texture nodes can hold any isotropic regular grid independent of its dimension. The MovieTexture node holds not just a single grid but a number of grids, which can be used to define animated volume data sets. Just as with 2D textures, a single grid value in the data set can have one to four components: one-component textures define an alpha or gray channel, two components a gray plus alpha channel, three values are interpreted as an RGB color, and four components define an RGB color plus alpha channel. There is no domain- or system-independent file format for 3D volume data. An extension exists for the TIFF standard [6] (i.e. 3D-TIFF) and some formats are domain specific (e.g. DICOM [8]). Our implementation provides support for a proprietary format as well as animated GIFs, where single frames of the animated GIF sequence are interpreted as slices in the (s,t,r) coordinate system. The definition of a generic volume file format is mandatory for the success of our proposal, but is beyond the scope of this work. The use of 3D grids does not necessarily demand 3D texture support in hardware: the texture node only holds the regular grid samples, and the underlying rendering system can decide to re-sample the volume data into a 3D texture or to create a series of 2D texture slices.
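For the 2D-texture path, the browser can simply cut the stored grid into axis-aligned slices; a minimal sketch (hypothetical code, assuming a single-component w x h x d grid stored slice-major along r):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Extract slice r (perpendicular to the r axis) from a w x h x d
// single-component grid; the result can be uploaded directly as a
// 2D texture via glTexImage2D.
std::vector<uint8_t> extractSlice(const std::vector<uint8_t>& grid,
                                  int w, int h, int r)
{
    std::vector<uint8_t> slice(size_t(w) * h);
    const uint8_t* src = grid.data() + size_t(r) * w * h;
    std::copy(src, src + slice.size(), slice.begin());
    return slice;
}
```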

3.3 The Volume Node

Shape nodes form the building blocks from which VRML worlds are created. Recall that each Shape node consists of two parts, the Appearance and the Geometry. The Appearance node specifies the visual properties of the Geometry by defining the Material and, in particular, the Texture node. The Volume node that we propose is a specialization of the Geometry node. It dynamically generates slices according to the current camera position or field settings, implementing the techniques described in section 2. Since the Volume node only calculates and renders the polygonal slices, it must have a reference to the actual volume grid data. Similar to a textured IndexedFaceSet, the Volume node utilizes the grid information from the Appearance settings of the parent Shape node.

PROTO Volume [
  exposedField SFInt32  sliceNumber 0
  exposedField SFVec3f  sliceNormal 0 0 0
  exposedField SFVec3f  size        1 1 1
  exposedField SFString blending    "OVER"
  exposedField MFFloat  transAlpha  []
  exposedField MFColor  transColor  []
] {}

The Volume node fields control the slice generation (sliceNumber, sliceNormal, size) and set the blending (blending) and transfer (transAlpha, transColor) functions. In the following, the fields of the Volume node are explained in detail:

sliceNumber: If set to the default value (0), the system chooses the number of slices used for rendering. The system sets the slice spacing so that the texture sampling rate from slice to slice equals the texture sampling rate within each slice. A uniform sampling rate treats the texture texels as cubical voxels, which minimizes re-sampling artifacts.

sliceNormal: If set to the default value (0,0,0), the system slices the volume data perpendicular to the viewing direction, stacked along the direction of view. A value different from (0,0,0) defines a constant normal vector for all slices in the local coordinate system.

size: Defines the dimensions in the local coordinate system. The values scale the (s,t,r) entries to define a volume domain in the surrounding scene.

blending: Specifies the blending function to be used. Our test implementation provides the following values:

  OVER: Volume slices blended with the over operator approximate the flow of light through a transparent material. The transparency of each point in the material is determined by the value of the texel's alpha channel; texels with higher alpha values tend to obscure texels behind them and stand out through the obscuring texels in front of them.

  UNDER: Volume slices rendered front to back with the under operator give the same result as the over operator blending slices from back to front.

  MIP: Maximum intensity projection (MIP) is used mainly in medical imaging. MIP finds the brightest texel alpha from all the texture slices at each pixel location. MIP is a contrast-enhancing operator; structures with higher alpha values tend to stand out against the surrounding data.

  ATTENUATE: The attenuate operator simulates an X-ray of the material. The texel's alpha appears to attenuate light shining through the material along the view direction towards the viewer; the texel alpha channel models material density.

transAlpha: Defines the alpha values for the transfer function (see below).

transColor: Defines the color values for the transfer function (see below).
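Putting the pieces together, a complete Shape combining the extended ImageTexture with the proposed Volume node could look as follows (a hypothetical scene; the file name and field values are made up for illustration):

PROTO example use:

Shape {
  appearance Appearance {
    texture ImageTexture {
      url "head.gif"       # animated GIF: frames become (s,t,r) slices
      repeatR TRUE
    }
  }
  geometry Volume {
    sliceNumber 128        # 0 would let the system choose
    sliceNormal 0 0 0      # slice perpendicular to the view direction
    size        1 1 1
    blending    "MIP"
    transAlpha  [ 0, 1 ]   # linear ramp: density maps directly to alpha
  }
}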

3.4 Hybrid Rendering

With the proposed extension it is easy to mix geometric primitives (e.g. IndexedFaceSets) and volumetric data (Volume) in one scene, a process referred to as hybrid rendering. For example, medical data of a head can be rendered volumetrically together with a polygonal data set for the attached skeleton (see Figure 5). As long as the objects do not intersect, no special treatment of the shapes is necessary (besides sorting the transparent polygonal slices of the volume). If the objects do intersect, two cases have to be distinguished: the embedded surface model may be opaque or transparent. If the geometry is opaque, the volume is rendered using the depth buffer in addition to rendering from back to front. If the geometry is transparent, it must be chopped into slabs and rendered interleaved with the slices.

Figure 5: Hybrid rendering (mixing surface and volume data)
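For the opaque case, the depth-buffer treatment amounts to two passes; a sketch in OpenGL (the pass functions are hypothetical placeholders standing in for the browser's mesh and slice rendering):

```cpp
#include <GL/gl.h>

void drawMeshes();         // hypothetical: renders the opaque geometry
void drawVolumeSlices();   // hypothetical: renders slices back to front

// Hybrid frame: opaque surfaces write depth first; the transparent
// slices then test against that depth without overwriting it, so
// surfaces inside the volume occlude the slices correctly.
void renderHybridFrame()
{
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    drawMeshes();

    glEnable(GL_BLEND);
    glDepthMask(GL_FALSE);
    drawVolumeSlices();
    glDepthMask(GL_TRUE);
}
```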

3.5 Defining the Transfer Function

In practice, the volume rendering method needs an alpha and color value, or an alpha and gray value, per voxel to perform correctly. If one channel is missing, the system has to be able to generate the missing channel in the grid data on the fly. In general, the transfer function might be applied for various purposes, but in many applications it is used simply to generate a missing alpha channel. The transfer function is defined by the transAlpha and transColor fields, which are part of the Volume definition. They define how to transform between given volume density information and the color space. Raw volume data sets might use different intervals to encode density information; typical cases include binary values, real numbers normalized to [0,1], or 8-bit types in [0,255]. The alpha values in the transAlpha field have the range [0,1]; therefore the system scales the settings before applying the transfer function. The following example leads to the transfer function shown in Figure 4:

Volume {
  ...
  transAlpha [ 0.0, 0.3, 0.305, 0.6, 1 ]
  transColor [ 0 0.1 0.2,
               0.5 0.5 0.5,
               0.8 0.6 0.5,
               0.4 0 0,
               0 0 0 ]
  ...
}

Figure 4: Example transfer function (alpha and red/green/blue ramps over the normalized data value range [0,1])

4 Implementation

Our test implementation of the VRML97 Volume extension was conducted using the AVALON [14] system. Tests and images were generated on an SGI Octane, a Sun Ultra 10 and a standard PC equipped with a 3Dlabs GVX graphics board. AVALON is a VR/AR system employing an extension of VRML97 as the application description language. The system is implemented in C++ and uses OpenGL to access the hardware rendering layer. Versions for all major platforms exist (IRIX/SunOS/Linux/Win32). AVALON was developed at the Computer Graphics Center (ZGDV) and supports various input devices (space mouse, CyberGlove, Polhemus) and output channels (multiple active/passive, symmetric/asymmetric, mono/stereo views) for desktop and immersive AR/VR applications. The texture and volume extensions are implemented as native AVALON nodes in C++. The following table shows the frame rates achieved for different volume data sets rendered as 3D textures with and without hardware-accelerated graphics:

Data set                            SGI Octane MXE   P3 733, 3Dlabs GVX   Sun U10 Creator 3D
                                    (HW textures)    (HW textures)        (SW textures)
visible human, 128³ voxels, RGBA    17.3 fps         13.2 fps             2.7 fps
head, 128³ voxels, gray + alpha     17.8 fps         12.8 fps             3.2 fps

References

[1] A. E. Kaufman and C. J. Pavlakos. PVR: High-performance volume rendering. IEEE Computational Science and Engineering, pp. 18-28, 1996.

[2] Kurt Akeley. RealityEngine graphics. In James T. Kajiya, editor, Computer Graphics (SIGGRAPH '93 Proceedings), volume 27, pages 109-116, August 1993.

[3] Barthold Lichtenbelt, Randy Crane, and Shaz Naqvi. Introduction to Volume Rendering. Hewlett-Packard Professional Books, January 1998.

[4] Brian Cabral, Nancy Cam, and Jim Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In Arie Kaufman and Wolfgang Krueger, editors, 1994 Symposium on Volume Visualization, pages 91-98. ACM SIGGRAPH, October 1994. ISBN 0-89791-741-3.

[5] Chris Henn and Roni Yagel. Advanced Geometric Techniques for Ray Casting Volumes. SIGGRAPH Course Notes, 1998.

[6] Aldus Corporation and Microsoft Corporation. Tag Image File Format (TIFF) Specification, Revision 5.0. Technical report, Aldus Corporation, Seattle, WA, and Microsoft Corporation, Redmond, WA, August 1988.

5 Conclusions

We have presented a framework for the integration of volume rendering into VRML200x/X3D. The fact that standard graphics hardware can be exploited to render volumes at interactive rates makes volume visualization an important part of Web3D techniques. Using texturing for volume rendering brings the possibility of combined display of volumes and surface models. Once volume data sets are integrated into VRML or X3D, it is desirable to also extend texture coordinates to three components and exploit the capabilities of most modern graphics systems. Standard hardware can also be used to clip the volume model for interactive exploration; nodes for the definition of clipping planes are already part of the AVALON system, but we consider clipping beyond the scope of this paper. While we use only two specific techniques for volume rendering, the node definition does not imply specific methods. As future work, the underlying volume rendering techniques could be extended to achieve higher image quality.

6 Acknowledgements

We thank Georgios Sakas and Gerrit Voss for providing static and dynamic volume models. Moreover, we would like to thank them for their very valuable comments and support of this work.

[7] Timothy J. Cullip and Ulrich Neumann. Accelerating volume reconstruction with 3D texture hardware. Technical Report TR93-027, Department of Computer Science, University of North Carolina at Chapel Hill, May 1994.

[8] DICOM specification. http://medical.nema.org/, 10 October 2000.

[9] T. T. Elvins and R. Jain. Web-based volumetric data retrieval. In David R. Nadeau and John L. Moreland, editors, 1995 Symposium on the Virtual Reality Modeling Language, VRML '95, San Diego, California, December 14-15, 1995, pages 7-12, New York, NY, USA, 1996. ACM Press.

[10] T. Todd Elvins. A survey of algorithms for volume visualization. Computer Graphics, 26(3):194-201, August 1992.

[11] Klaus D. Engel, Rüdiger Westermann, and Thomas Ertl. Isosurface extraction techniques for web-based volume visualization. In David Ebert, Markus Gross, and Bernd Hamann, editors, IEEE Visualization '99, pages 139-146, San Francisco, 1999. IEEE.

[12] Mark Segal and Kurt Akeley. The OpenGL Graphics System: A Specification. Silicon Graphics, Inc., April 1999.

[13] TGS. Volume rendering for Open Inventor and 3DMasterSuite. http://www.tgs.com/3DMS/vol_render.htm, 10 October 2000.

[14] ZGDV. AVALON - Utilizing VRML for immersive VR/AR environments. http://www.zgdv.de/avalon.