Signal Processing Aspects of Scientific Visualization


Robert J. Moorhead II and Zhifan Zhu

(rjm@erc.msstate.edu, zhu@erc.msstate.edu)

Scientific Visualization is the mapping of scientific data and information to imagery to gain understanding or insight. The signal processing aspects of the mapping process are often underestimated. Issues such as sampling rate, reconstruction filters, the human visual system, etc. have significant effects on data analysis and presentation. Data analysis and presentation involve transforming numeric representations into imagery and viewing that imagery via either a softcopy device (e.g., a cathode ray tube or CRT) or a hardcopy device (e.g., printer output, 35mm film, videotape, etc.). In performing these transformations and presentations, a number of signal processing issues should be considered. Aliasing and filtering [5, 6, 7, 36] are important issues since most data that are visualized are sampled data, discrete in space, time, and value. Color theory and models [44, 45] are important since color is often used to represent functional value. Due to the often massive quantity of information that needs to be visualized, automatic feature extraction [37, 50, 51, 60, 61, 62], data compression [4, 56], and multiresolution visualization [38, 53, 54] are some of the most active areas of signal-processing-oriented scientific visualization research today, especially in the areas of multivariate and flow visualization [24, 25, 26]. In addition, recent publications [33, 55] have shown a way to use the Fourier Projection-Slice Theorem to generate images of volume data faster.

The goal of this paper is to encourage the signal processing community to address the needs of the scientific visualization community. To aid in this effort, we first explain the visualization process. Then we describe two signal processing issues -- sampling and color space selection -- that arise in various visualization techniques. Next, we provide a survey of some of the various visualization techniques, emphasizing the difference in visualizing time-invariant and time-variant data. Finally two visualization applications will be described in detail to exemplify the signal processing aspects of scientific visualization.


The Visualization Process

The visualization process usually involves creating geometric objects (e.g., points, lines, and polygons) from a set of discrete values at a finite number of locations in 3D space. These geometric objects are then rendered into one or more images. In some visualization techniques, the data are directly mapped into imagery, bypassing the intermediate step of mapping the data into one or more geometric representations.

Mapping Numbers to Imagery

Shape

To explore a dataset, a scientist may map the data in a number of different ways. One mapping is to have the functional values (temperature, pressure, humidity, salinity, density, velocity, stress, etc.) determine the shape of an object. Figure 1 is an image depicting sea surface height (SSH), in which the height is represented by the amount by which a plane representing mean sea level is deformed.


Figure 1. Sea surface height shown using surface deformation.




Color

Another mapping that is often used is mapping different values to different colors. A number of color mappings have been previously recommended [30, 44, 45]. There are actually two classes of colormaps: shading and functional. In shading colormaps, the colors are determined by the lighting, the surface properties, and the relationship between the light(s) and the surface. Shading colormaps are most frequently used to help visualize surface shape (one is used in Fig. 1, for example). In functional colormaps, the color at each point is determined by mapping functional values (pressure, temperature, velocity components, etc.) into colors.

In Fig. 2, we visualize the fuselage of a small plane with a shading map on the left and a functional colormap on the right. The colors in the functional colormap are determined by the computed pressure. Note the specular highlights along the sharp curves on the side with the shading map; note the high pressures along the nose cone and the windshield on the side with the functional map.


Figure 2. The fuselage of a small plane with a shading map on the left and a functional colormap on the right. The colors in the functional colormap are determined by the computed pressure. Note the specular highlights along the sharp curves on the side with the shading map; note the high pressures along the nose cone and the windshield on the side with the function map.


Other examples are shown in Fig. 3, where we visualize SSH using the indicated functional colormap. In Fig. 4, a plane is deformed based on the SSH and (redundantly) colored with the same colors as in Fig. 3 to simultaneously show two ways of visualizing a scalar value.

No one colormap meets all needs; different colormaps have different advantages. A mapping with a smoothly varying transfer function gives the viewer an impression of the global variation in values, but does not allow the viewer to see small variations. A mapping with sharp transition regions emphasizes certain small variations, e.g., the difference between a sea surface height of 9.9 and one of 10.0. Although humans can distinguish between hundreds of thousands of colors pairwise (only two colors side-by-side), within an image with many colors the human visual system has difficulty distinguishing more than about 30,000 colors; smaller variations are imperceptible, and 16-bit numbers could represent all the color resolution that can be seen. Colormaps can be viewed much like floating-point number formats: the product of the range and the precision is fixed. To see a large range you have to give up precision, and vice versa. To increase the functional resolution of a colormap, i.e., the ability to "see" small differences in functional values, the range of functional values mapped by the colormap must be limited.
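
The tradeoff can be made concrete with a small sketch. The Python fragment below (the data values, range limits, and 256-entry colormap size are illustrative choices, not taken from the figures) maps three nearby sea surface heights to colormap indices twice: once over a wide range of functional values, where they collapse to a single color, and once over a narrow window, where the 0.05-unit differences become visible.

import numpy as np

def to_colormap_index(values, vmin, vmax, n_colors=256):
    """Map scalar values into [0, n_colors-1]; values outside [vmin, vmax] are clamped."""
    t = (np.asarray(values, dtype=float) - vmin) / (vmax - vmin)
    return np.clip((t * (n_colors - 1)).round().astype(int), 0, n_colors - 1)

ssh = np.array([9.90, 9.95, 10.00])          # hypothetical sea surface heights (m)
print(to_colormap_index(ssh, -100.0, 100.0)) # full range: all three map to the same color
print(to_colormap_index(ssh, 9.5, 10.5))     # narrow window: the 0.05 m differences are visible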


Figure 3. Sea surface height shown using various colors.


Figure 4. Sea surface height shown using various colors and surface deformation.

Opacity

For data defined in 3D space, a third mapping is possible. The functional values can be thought of as representing the local density within the volume. An image is formed by projecting light through the volume and creating an image on a surface on the opposite side of the volume, in much the way an X-ray image is formed (see Fig. 27). Usually the opacity mapping is proportional; i.e., the larger the functional values along the traversed path, the more the light is attenuated.

Exploratory versus Presentation Visualization

There are two flavors of visualization: passive (presentation), usually done via hardcopy, and active (exploratory or interactive), usually done via softcopy. In interactive visualization, the user can interactively change his view of the data, i.e., move through the data, rotate one or more objects, change the colormap(s), or move a light source around to simulate the way one would explore a new environment or object. To interactively visualize a time-varying 3D dataset, the computer upon which the data are visualized needs a large main memory to hold the dataset and a graphics subsystem which will map world coordinates (e.g., meters) into vertices of geometric primitives (e.g., lines or polygons). It must also be able to rotate, translate, and zoom that set of geometric primitives, and to render those geometric primitives into an image [15, chapter 18].

In passively viewing a changing environment, the viewer lacks the ability to "steer" the changes in the environment, and rendering speed is much less important. Many scientists have visualization experts or technicians create scientific animations for them. The scientist then analyzes the resulting animations by viewing a video recording. Many signal processing issues are involved in this process. Spatial filtering is needed to convert the computer-generated imagery to the nominal 512 x 480 spatial resolution of video. The format conversion from the three color plane (red, green, and blue) representation used in the computer framebuffer to the composite coding scheme of video involves color space conversion and spatio-temporal filtering and interpolation.

The development and widespread use of the MPEG-2 standard [42] and HDTV technology [11] will do much to alleviate the image fidelity limitations (relatively limited spatial resolution, small color gamut, cross color, cross luminance, interline and large-area flicker, etc.) of the present video standards. These and other signal processing issues will be addressed in more detail in the following sections.

Signal Processing Issues

Aliasing is the most common signal processing problem in visualization. To explore a large dataset, a scientist will often subsample the data volume. It is not unusual for a scientist to begin to visually analyze large datasets by considering only every tenth data point. This subsampling usually introduces significant visual artifacts (e.g., mislocated edges, false contours, etc.) into the imagery that mislead the scientist. Although most signal processing experts understand the potential problem, many users of scientific visualization toolkits [2, 9] and techniques are unaware of signal processing issues. Another simple data reduction technique that is often used is to only consider a small section of the total volume. One of the reasons signal processing has to be considered in visualizing scientific data is that the presentation medium is usually a sampled surface, e.g., a CRT.

The essence of the problem can be seen in the simplest of graphical generation functions: line drawing. In Fig. 5, a line is drawn between locations (2,2) and (10,7) in as continuous a fashion as the printing process for this magazine will allow, which is on the order of 300 dots per printed inch. In Fig. 6, a line is drawn between (2,2) and (10,7) by highlighting all the pixels through which the continuous line would pass; this produces a poorly represented line. By using a better line drawing algorithm and specifying gradations in intensity for neighboring pixels (an anti-aliasing technique), we can better represent the line on a CRT, as shown in Fig. 7. For more details on various anti-aliasing techniques, see [15, ch. 3].
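
As a rough illustration of the box-filter approach in Fig. 7, the following Python sketch approximates each pixel's coverage by supersampling a one-pixel-wide line segment and averaging the subpixel hits; the grid size, line thickness, and supersampling factor are illustrative assumptions.

import numpy as np

def aa_line(p0, p1, width, height, ss=8, half_thickness=0.5):
    """Antialiased line by supersampling: each pixel's intensity approximates the
    fraction of its area covered by a line segment of the given half-thickness
    (a one-pixel-wide box filter applied to a high-resolution rasterization)."""
    ys, xs = np.mgrid[0:height*ss, 0:width*ss]
    x = (xs + 0.5) / ss                  # subpixel centers in pixel coordinates
    y = (ys + 0.5) / ss
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    t = np.clip(((x - x0) * dx + (y - y0) * dy) / (dx * dx + dy * dy), 0.0, 1.0)
    dist = np.hypot(x - (x0 + t * dx), y - (y0 + t * dy))   # distance to the segment
    covered = (dist <= half_thickness).astype(float)
    # box filter: average the ss x ss subpixels belonging to each pixel
    return covered.reshape(height, ss, width, ss).mean(axis=(1, 3))

img = aa_line((2, 2), (10, 7), width=12, height=9)
print(np.round(img, 2))                  # intensities proportional to area covered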


Figure 5. A continuous line drawn between (2,2) and (10,7).



Figure 6. A discrete line drawn between (2,2) and (10,7).



Figure 7. An antialiased line drawn between (2,2) and (10,7) using a one-pixel wide box filter. The intensity of each pixel is proportional to the area covered. On many devices, the pixels would actually overlap.


An example of the visual artifacts that can occur in higher dimensional spaces can be seen when visualizing the function f(x,y) = sin(x)cos(y) over a domain spanning four full periods in each dimension, as shown in Fig. 8. Color is used to help elucidate the 3D shape of the surface. The minimum functional value (-1) is shown in orange and the maximum value (1) in red; the same colormap is used in Figs. 8-11. From Nyquist's theorem it is known that there must be at least 8 samples over this domain to accurately reconstruct the function sin(x) or cos(y). To accurately reconstruct the function sin(x)cos(y) as a surface from uniformly spaced samples therefore requires at least 8 samples in each dimension.




Figure 8. f(x,y) = sin(x)cos(y) evaluated over four full periods in both dimensions and supersampled at 200x200 to give a good approximation to the continuous function.


In Fig. 9 and Fig. 10, we show four visualizations of the sampled function. In Fig. 9, linear interpolation (a triangular filter) is used between samples to reconstruct the surface; this is the type of interpolation most often used in exploratory visualization, since it is the only interpolation implemented in hardware in most graphics subsystems. In Fig. 10, the optimal reconstruction filter, a sinc filter, is used. The intent in showing the reconstruction obtained with both filters is to show the difference between what is normally done in exploratory visualization and what could be done if the optimal reconstruction filter were used. When only three samples are taken in each dimension, a surface with only two peaks instead of 32 peaks is created (Fig. 9a and Fig. 10a). With a 6x6 sampling (Fig. 9b and Fig. 10b), the reconstructed surface undulates, but it is still a poor approximation of the original surface.
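
The effect of the sampling rate can be reproduced with a one-dimensional analogue of the figures. The sketch below (Python/NumPy) samples sin(x) over an assumed four-period domain at several rates and compares reconstruction with a triangular filter (np.interp) and with a truncated sinc filter; the specific sample counts only loosely mirror Figs. 9a-9d.

import numpy as np

def linear_recon(samples, xs_coarse, xs_fine):
    """Reconstruction with a triangular filter (linear interpolation)."""
    return np.interp(xs_fine, xs_coarse, samples)

def sinc_recon(samples, xs_coarse, xs_fine):
    """Whittaker-Shannon reconstruction with a (truncated) sinc filter."""
    T = xs_coarse[1] - xs_coarse[0]
    return np.array([np.sum(samples * np.sinc((x - xs_coarse) / T)) for x in xs_fine])

L = 8 * np.pi                                  # assumed domain: four periods of sin(x)
xs_fine = np.linspace(0.0, L, 801)             # dense reference grid
truth = np.sin(xs_fine)

for n in (3, 6, 9, 17):                        # coarse samplings, loosely mirroring Figs. 9a-9d
    xs = np.linspace(0.0, L, n)
    f = np.sin(xs)
    err_lin = np.abs(linear_recon(f, xs, xs_fine) - truth).max()
    err_sinc = np.abs(sinc_recon(f, xs, xs_fine) - truth).max()
    print(f"{n:2d} samples: linear error {err_lin:.2f}, sinc error {err_sinc:.2f}")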

The reconstructed surface using linear interpolation with a 9x9 sampling is shown in Fig. 9c. The sampled functional values are shown in Table 1 to an accuracy of two decimal digits. One might first be surprised at the appearance of Fig. 9c, since it was noted that an 8x8 sampling was sufficient.



One problem is rendering non-planar quadrilaterals, which have two possible triangulations. If the "correct" points are connected, a surface with 32 peaks and intervening bottoms can be created (see Fig. 11a), but it is just as easy to create an "incorrect" one (Fig. 9c is repeated as Fig. 11b for ease of comparison) [48]. Take, for example, the following section of the 9x9 sampled function:





In Figs. 9a-9d, the triangulation is as in the figure below:


which creates long "ridges" and "valleys" along the diagonals. If instead, the triangulation is alternated, like the figures below,




the peaks and bottoms are apparent (Fig. 11a). Fig. 10c shows the improvement when a sinc filter is used.

In Figs. 9d and 10d, the surface is reconstructed using four uniformly spaced samples per period in both dimensions. Since the sampling rate is twice the Nyquist rate, the correct number of peaks is seen even when the surface is reconstructed using only a triangular filter. This example shows a problem which frequently occurs in exploratory scientific visualization: to gain interactivity, a user will reconstruct a surface using only some fraction of the total dataset. Features may be missed or artifacts may be perceived as features.


Figure 9. Reconstruction of sin(x)cos(y) from four different samplings using linear interpolation (a triangular filter).



Figure 10. Reconstruction of sin(x)cos(y) from four different samplings using optimal interpolation (a sinc filter).



Figure 11. Visual difference in two triangulation schemes.


Color Models

With the widespread use of computer monitors capable of displaying images with 256 shades of red, green, and blue, respectively, color has become a common and effective tool in visualization. In scientific visualization, differentapplications or techniques utilize different color spaces (models). Since most sensors and displays are based on red, green, and blue filters or phosphors, the techniques most closely associated with input and output -- e.g., histogram equalization and image compositing -- tend to be based on the RGB color model [15]. The hue/saturation/value (HSV) color space is often used in direct volume visualization and other mapping-oriented algorithms, since it is a reasonable model of the perceptual level of the human visual system. The colormap used in Figs. 8-11, for example, is the hue range of the HSV color space, with V (brightness) and S fixed at their maximum value. Other color models, such as YUV or YCrCb -- both of which are luminance (Y) plus two chrominance difference signals and are the color models used in television standards -- are often the representation used when performing image compression. For more information on color models, see [15].
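
A small sketch of these mappings follows; the hue-only colormap mirrors the one described for Figs. 8-11, and the luminance/chrominance conversion uses the ITU-R BT.601 coefficients as a representative video color model.

import colorsys

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 luminance/chrominance conversion for r, g, b in [0, 1]."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)          # scaled blue-difference chrominance
    cr = 0.713 * (r - y)          # scaled red-difference chrominance
    return y, cb, cr

def hue_colormap(t):
    """Map t in [0, 1] to RGB by sweeping hue with saturation and value fixed at 1."""
    return colorsys.hsv_to_rgb(t, 1.0, 1.0)

print(hue_colormap(0.0), hue_colormap(0.5))    # red and cyan, respectively
print(rgb_to_ycbcr(*hue_colormap(0.0)))        # the same red expressed as Y, Cb, Cr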

Summary

There are many other signal processing issues in scientific visualization. Rather than illustrate them with contrived examples, we now describe some visualization techniques, emphasizing the signal processing aspects in each. A delineation is made between time-invariant visualization and time-variant visualization to emphasize the additional signal processing problems that can occur when the data to be visualized are time-varying.


Visualization of Time-Invariant 2D Data


Although the major emphasis in scientific visualization is on 3D data, developing better 2D visualization techniques remains an active area of research and development. The major distinction between image processing and 2D time-invariant scientific visualization is in the sampling density. In image processing, functional values sampled on a regular rectilinear grid are mapped one-to-one onto pixels; in 2D scientific visualization the number of data values is usually much less than the number of pixels in the resulting image, which motivates many of the signal processing issues like sampling, interpolation, and reconstruction. Many of the techniques for higher dimensional spaces are extensions of the techniques developed for 2D data. Although the domain is a planar surface in the following discussion, most of the techniques easily extend to data sampled on a curved surface.

Scalar Visualization

Contours

One of the most basic scientific visualization techniques is the generation of a set of curves within a plane which represent locations of constant functional value ("isocontours" or "contours"). In Fig. 12, a set of functional values is extracted from the lower left of Table 1 and x's are placed on the sampled surface where the functional value of 0.50 would occur based on linear interpolation between the sample values. In Figs. 13a-13d the x's are connected in four of the 16 possible ways; the correct contouring is unclear since there are saddle points due to the non-planar quadrilaterals formed by adjacent vertices. This is the same class of problem as shown in Fig. 9c. Sabin [46] presents a number of techniques that can be used to resolve the ambiguity. A common method is to split the cell into four triangles by connecting opposite vertices, set the value at the center of the cell to the average of the values at the four vertices, find the intersections along each triangle edge, and connect the points of intersection in the obvious way, which will be unambiguous. This is shown in Fig. 14a for one of the ambiguous cells. Applying this technique to the data in Fig. 12 produces the contours in Fig. 14b, which have twice the number of linear segments per contour as the ones in Fig. 13a. Even more elaborate reconstruction techniques [15] can be employed.
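
The interpolation and disambiguation steps can be sketched in a few lines of Python; the corner values below are hypothetical, and the center-averaging rule is the one just described.

def crossing(p0, p1, f0, f1, level):
    """Linear interpolation of the point on segment p0-p1 where f equals level."""
    t = (level - f0) / (f1 - f0)
    return (p0[0] + t * (p1[0] - p0[0]), p0[1] + t * (p1[1] - p0[1]))

def is_ambiguous(f00, f10, f11, f01, level):
    """A cell is ambiguous (saddle) when the contour crosses all four edges, i.e.,
    diagonally opposite corners lie on the same side of the level."""
    above = [f >= level for f in (f00, f10, f11, f01)]
    return above == [True, False, True, False] or above == [False, True, False, True]

def center_value(f00, f10, f11, f01):
    """Disambiguation rule from the text: assign the cell center the average value."""
    return 0.25 * (f00 + f10 + f11 + f01)

# Hypothetical cell values (corners in the order (0,0), (1,0), (1,1), (0,1)):
f00, f10, f11, f01 = 0.7, 0.3, 0.6, 0.2
print(is_ambiguous(f00, f10, f11, f01, 0.5))     # True: a saddle cell
print(center_value(f00, f10, f11, f01))          # 0.45: the center falls below 0.5
print(crossing((0, 0), (1, 0), f00, f10, 0.5))   # contour crosses the bottom edge at x = 0.5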


Figure 12. A 2D range of functional values extracted from the lower left of Table 1 with x's placed on the sampled surface where the functional value of 0.50 would occur based on linear interpolation between the sample values.



Figure 13. Four of sixteen possible connections (contours) of the interpolated functional values in Fig. 12.




Figure 14. (a) An example of how to determine the correct contour connectivity in a cell with a saddle point. The cell is split into four triangles by connecting opposite vertices, the value at the center of the cell is set to the average of the values at the four vertices, the intersections are found along each triangle edge, and the points of intersection are connected in the obvious way, which is unambiguous. (b) Application of the technique to the data in Fig. 12.



Shading Techniques

To create an image of N pixels from a 2D sample set with far fewer than N values requires some type of interpolation. A number of such techniques exist, and two are shown in Fig. 15. Figure 15a shows a set of four sample points that represent the vertices of a quadrilateral. The functional values at these four sample points can be mapped in a number of ways, one of which is via a colormap. The issue then becomes how to color the pixels between the pixels associated with the vertices.

The colormap in Figure 15b shows what color is used to represent each functional value. Figure 15c is an example of flat shading; all the interior pixels are one color. The quadrilateral is split into two triangles for two reasons: first, triangles are inherently planar, which makes rendering (shading) easier, and second, triangles are inherently convex, which makes clipping easier. There has to be some systematic method for selecting which vertex to use to select the functional value; here the leftmost vertex on the lowest line is used. Figure 15d shows the resulting quadrilateral. Figures 15e and 15f show Gouraud shading, an example of bilinear interpolation.
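
A minimal sketch of the two shading modes, assuming a unit quadrilateral, hypothetical corner values, and one common diagonal-split convention for the flat case (not the exact vertex-selection rule used in Fig. 15):

import numpy as np

def shade_quad(f00, f10, f01, f11, n=8, mode="gouraud"):
    """Fill an n x n block of pixels from the four corner values of a quadrilateral.
    'flat' uses a single corner value per triangle (box-filter-like reconstruction);
    'gouraud' bilinearly interpolates the corner values (triangle-filter-like)."""
    img = np.empty((n, n))
    for j in range(n):
        for i in range(n):
            u, v = (i + 0.5) / n, (j + 0.5) / n          # position inside the quad
            if mode == "gouraud":
                img[j, i] = ((1 - u) * (1 - v) * f00 + u * (1 - v) * f10 +
                             (1 - u) * v * f01 + u * v * f11)
            else:   # flat: split along the diagonal, one corner value per triangle
                img[j, i] = f00 if u + v < 1.0 else f11
    return img

print(shade_quad(0.0, 1.0, 0.5, 1.0, n=4, mode="flat"))
print(shade_quad(0.0, 1.0, 0.5, 1.0, n=4, mode="gouraud").round(2))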



Figure 15. (a) A quadrilateral with functional values defined only at the vertices. (b) A colormap used to map functional values to colors. (c) Triangulation used for flat shading. (d) The flat-shaded quadrilateral with the functional value used to color each triangle overlaid. (e) Interpolation used for Gouraud shading. (f) The Gouraud-shaded quadrilateral.


The magnitude of the current in the Persian Gulf is rendered in Fig. 16 using flat and Gouraud shading. Flat shading allows a computational scientist to see the values within each cell or for each sample area. Gouraud shading produces a smoother picture and thus "looks better." In classical signal processing parlance, these two shading techniques are just reconstruction with a box filter and a triangle filter, respectively. Since more elaborate reconstruction techniques are not supported in hardware, they are seldom used for interactive visualization. However, in creating presentation visualizations, for which image quality is more important than rendering speed, higher order reconstruction and shading techniques [15] are used.




Figure 16. Current magnitude in the Persian Gulf shown with flat shading and Gouraud shading.


Multivariate (Vector) Visualization

Arrows

Multivariate visualization involves visualizing datasets with multiple functional values at each sample point; the most common task is vector field visualization. One way to visualize a vector quantity is to use geometric forms such as arrows or tufts [40, 57] (see Fig. 17). The direction is determined by the vector sum of the components and the length of the arrow is scaled by the maximum magnitude within the field. Numerous problems occur with using geometric forms. First, since the primitive must cover multiple pixels, there is significant visual ambiguity as to which point the arrow represents: is it the head, the tail, or somewhere in between? Although the arrow usually represents the functional value at the tail, visually that is hard to perceive, unless the underlying grid structure is overlaid as in Fig. 17.

Another problem is the clutter on the screen, since each arrow takes up so much screen space. Note that if we reduce the "directional ambiguity" by adding arrow heads, we increase the clutter. This leads to undersampling of the space, i.e., the visualization technique is not optimum in mapping the maximum amount of information into the image. Vector fields with wide dynamic ranges in magnitude and with highly turbulent flow are especially difficult. Since arrows are not spherically symmetric, the allowable sampling density varies in each dimension, even on a Cartesian grid. For example, if the x component of the vectors is much larger than the y component, the sampling density in the x direction must be much less than in the y direction to prevent overlap. Also, significant visual distortion occurs when the projection (the mapping of the 3D information into a 2D image) is not orthogonal. This can be seen in Fig. 18, in which the 2D current within a constant-density layer of the ocean is visualized using arrows on a surface which is projected obliquely along with the associated ocean bottom.



Figure 17. 2D vectors shown using arrows on a plane. The lengths of the arrows are proportional to the magnitude of the vector. Excessively large values are clipped and extremely small values are eliminated. The underlying grid structure is drawn to provide a point of reference as to which way the vector is oriented. Notice how cluttered the image is.



Figure 18. The 2D current within a layer of the ocean visualized using arrows on a surface which is projected obliquely along with the associated ocean bottom. Note the significant visual distortion due to the oblique projection.




"Colorwheel"

By analyzing the traditional arrow technique from a signal processing perspective of sampling density and by viewing the visualization as the interpolation process it is, we can develop a higher resolution 2D vector visualization technique. Examples of a higher resolution scheme are shown in Fig. 19 and Fig. 20, in which the surface current (a 2D vector field) in the North Pacific is shown. The technique has been given the name "colorwheel" [28, 29]. In this mapping of vector data, the HSV color space is used. The direction of the vector is represented by the hue and the magnitude of the vector is redundantly mapped to both value and saturation since the human visual system has a hard time distinguishing between changes in value and changes in saturation over most of the HSV color space.

In this mapping, each pixel can represent a different vector value. It has been found that a log-scale mapping of vector magnitude to saturation/value works well, given the human visual system's nonlinear perception [27]. This is the difference between Fig. 19 (linear map) and Fig. 20 (log map). Fig. 21 shows the same area as Fig. 17 using the colorwheel vector visualization technique.

Vortices are much easier to see. The functional value resolution, i.e., the ability to see the variances over space as well as over the range of functional values on displays like 24-bit CRTs and most color printers, is much better with the colorwheel technique.
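
A sketch of the colorwheel mapping for a single vector sample is given below; the log compression used here (a simple log1p curve) is only one plausible choice, not necessarily the mapping used in Figs. 19-21.

import colorsys
import math

def colorwheel(u, v, max_mag, log_scale=True):
    """Colorwheel mapping of a 2D vector: direction -> hue,
    magnitude -> saturation and value (redundantly), optionally on a log scale."""
    mag = math.hypot(u, v)
    hue = (math.atan2(v, u) % (2.0 * math.pi)) / (2.0 * math.pi)   # direction in [0, 1)
    m = mag / max_mag if max_mag > 0 else 0.0
    if log_scale:
        m = math.log1p(9.0 * m) / math.log(10.0)   # compress the dynamic range, as in Fig. 20
    m = min(max(m, 0.0), 1.0)
    return colorsys.hsv_to_rgb(hue, m, m)          # one RGB pixel per vector sample

print(colorwheel(0.05, 0.0, max_mag=1.0))   # slow eastward flow: dark, desaturated red
print(colorwheel(0.0, 1.0, max_mag=1.0))    # fast northward flow: bright, saturated yellow-green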


Figure 19. Ocean current in the NE Pacific (a 2D vector field) is shown using color in lieu of geometric primitives. In this mapping, the HSV color space is used. The direction of the vector is represented by the hue and the magnitude of the vector is redundantly mapped to both value and saturation since the human visual system has a hard time distinguishing between changes in value and changes in saturation over most of the HSV color space. The brown area is the landmass.



Figure 20. Same picture as Fig. 19, except that: 1) the smaller vector magnitudes are not shown and 2) a log-scale mapping of vector magnitude to saturation/value is used to better match the human visual system's perception.



Figure 21. The same picture as Fig. 17, except the vectors are shown using the colorwheel technique instead of arrows.


Streamlines

A common way to obtain a more global view of steady flow is to create streamlines, which are lines everywhere tangential to the velocity field. Streamlines can be started anywhere within the field. The creation process is a curve fitting problem, and the determination of the consecutive points on the curve necessitates consideration of the Nyquist sampling rate and interpolation theory. Basically, there should be two points on the curve in each cell. Bilinear interpolation is frequently used for structured grids, while scattered data interpolation techniques, e.g., Hardy's multiquadric method [17], are usually used for irregular grids. The curve generation is often based on fourth-order Runge-Kutta methods, but even more accurate methods exist [19, 40]. There is a speed-accuracy tradeoff in streamline determination.
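
A minimal streamline tracer along these lines is sketched below, combining bilinear interpolation of a gridded 2D field with fourth-order Runge-Kutta steps; the circular test field, grid size, and step length are illustrative assumptions.

import numpy as np

def bilinear(field, x, y):
    """Bilinearly interpolate a 2D vector field sampled on a unit-spaced grid.
    field has shape (ny, nx, 2)."""
    ny, nx, _ = field.shape
    x = min(max(x, 0.0), nx - 1.001)
    y = min(max(y, 0.0), ny - 1.001)
    i, j = int(x), int(y)
    u, v = x - i, y - j
    return ((1 - u) * (1 - v) * field[j, i] + u * (1 - v) * field[j, i + 1] +
            (1 - u) * v * field[j + 1, i] + u * v * field[j + 1, i + 1])

def streamline(field, seed, h=0.25, steps=200):
    """Trace a streamline with fourth-order Runge-Kutta integration."""
    p = np.array(seed, dtype=float)
    pts = [p.copy()]
    for _ in range(steps):
        k1 = bilinear(field, *p)
        k2 = bilinear(field, *(p + 0.5 * h * k1))
        k3 = bilinear(field, *(p + 0.5 * h * k2))
        k4 = bilinear(field, *(p + h * k3))
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        pts.append(p.copy())
    return np.array(pts)

# Hypothetical circular flow (a single vortex) for illustration:
ys, xs = np.mgrid[0:32, 0:32]
flow = np.dstack([-(ys - 16.0), xs - 16.0]) * 0.05
print(streamline(flow, seed=(24.0, 16.0), steps=10)[-1])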

Line Integral Convolution

A recently presented technique for visualizing 2D vector fields based on signal processing concepts is called Line Integral Convolution (LIC) [10, 16]. The LIC algorithm combines a vector field sampled on a uniform rectilinear grid [10] (or a structured curvilinear grid [16]) with a texture map image (e.g., a random noise field) of the same dimension as the grid to produce an image in which the texture has been "locally blurred" by the vector field. The pixels in the output image are produced by the one-dimensional convolution of the texture pixels along a streamline (see Fig. 22) with a filter kernel:



F_out(i, j) = Σ_{p ∈ P} F_in(p) h_p    (1)

where P is the set of grid cells along the streamline, F_in(p) is the input texture pixel at grid cell p, and

h_p = ∫_{s_{p,in}}^{s_{p,out}} k(w) dw

where s_{p,in} is the arclength of the streamline from the point (i, j) to where the streamline enters cell p, s_{p,out} is the arclength of the streamline from the point (i, j) to where the streamline exits cell p, and k(w) is the convolution filter function (usually a box filter).

Therefore each pixel in the output image is a weighted average of all the pixels corresponding to the grid cells along any streamline which passes through the grid cell associated with that pixel. Streamlines are initiated from every grid cell. The output image is a smeared version of the input texture map, where the direction of smearing is determined by the vector field. Figure 23 is an example of an LIC image; unfortunately, it needs to be viewed as part of an animation to fully appreciate the value of the LIC technique. An obvious signal processing issue is determining the best filter or convolution kernel.
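
A simplified LIC sample for one output pixel can be sketched as follows; unlike the formulation above, which weights each cell by the exact arclength the streamline spends in it, this sketch uses unit Euler steps, nearest-cell texture lookups, and a box kernel of fixed length.

import numpy as np

def lic_pixel(vector_field, texture, i, j, L=10):
    """A simplified LIC sample: average the input texture along a short streamline
    traced forward and backward from cell (i, j) with unit Euler steps and a box kernel.
    vector_field has shape (ny, nx, 2); texture has shape (ny, nx)."""
    ny, nx = texture.shape
    total, count = texture[j, i], 1
    for direction in (+1.0, -1.0):
        x, y = i + 0.5, j + 0.5
        for _ in range(L):
            vx, vy = vector_field[int(y), int(x)]
            norm = np.hypot(vx, vy)
            if norm == 0:
                break
            x += direction * vx / norm
            y += direction * vy / norm
            if not (0 <= x < nx and 0 <= y < ny):
                break
            total += texture[int(y), int(x)]
            count += 1
    return total / count

# White-noise input texture smeared along a hypothetical uniform diagonal flow:
rng = np.random.default_rng(0)
noise = rng.random((64, 64))
flow = np.full((64, 64, 2), 1.0)
print(lic_pixel(flow, noise, 32, 32))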






Figure 23. Flow visualization of the ocean current in the Northeast Pacific using the Line Integral Convolution technique. Red indicates areas of high-magnitude flow and blue indicates areas of low-magnitude flow.


Critical Point Analysis

Yet another common method of visualizing 2D flow was presented in [20]. It connects critical points to display the 2D vector field topology. By connecting the critical points -- i.e., the points at which the vector magnitude vanishes -- with curves, a skeleton can be defined that characterizes the whole 2D vector field. Icons indicate the type of critical points: attracting/repelling foci, attracting/repelling nodes, saddle points, and centers. Examples of these icons and the classification criteria are shown in Fig. 24. Images showing only the topology of the vector field eliminate most of the redundant information. Critical point analysis is in effect a feature extraction technique and will be used in the applications section in a feature detection algorithm and later to show the effects of multiresolution analysis.
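
The classification in Fig. 24 can be expressed directly in terms of the Jacobian's eigenvalues; the sketch below covers the non-degenerate cases with a hypothetical tolerance.

import numpy as np

def classify_critical_point(jacobian, tol=1e-9):
    """Classify a 2D critical point from the eigenvalues of the velocity Jacobian,
    following the scheme summarized in Fig. 24 [20]."""
    eig = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    re, im = eig.real, eig.imag
    if np.all(np.abs(im) > tol):                      # complex eigenvalues: rotation
        if np.all(np.abs(re) <= tol):
            return "center"
        return "repelling focus" if re[0] > 0 else "attracting focus"
    if re[0] * re[1] < 0:                             # real eigenvalues of opposite sign
        return "saddle point"
    return "repelling node" if re[0] > 0 else "attracting node"

print(classify_critical_point([[0.0, -1.0], [1.0, 0.0]]))    # pure rotation -> center
print(classify_critical_point([[1.0, 0.0], [0.0, -1.0]]))    # stretch/compress -> saddle point
print(classify_critical_point([[-0.5, -1.0], [1.0, -0.5]]))  # inward spiral -> attracting focus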



Figure 24. Example icons and classification criteria for critical points. R1 and R2 denote the real parts of the eigenvalues of the Jacobian; I1 and I2 the imaginary parts [20].




Visualization of Time-Invariant 3D Data

Visualizing 2D data is rather straightforward, in that the domain is easily conceived as a flat surface with the functional values mapped into geometric primitives, color, etc. The domain is simply scaled into the surface area. There is no reduction in dimensions in going from the model domain to the image domain, so the relationship between positions in the model domain and positions in device coordinates is easily understood. Visualizing 3D data when the viewing device is a 2D surface is much more challenging. Since perspective projection of a data volume onto a flat surface is ambiguous, it is hard to tell relative distances and mentally perform inverse mappings from device coordinates back into model coordinates. Although there are many visual cues that can help (depth cueing, obscuration, lighting and shading, shadows, perspective projections, projection lines, contextual clues), they often do not give a definitive inverse mapping. The best visual cue is usually motion; by having the ability to move the volume about or to move within the volume, the user is able to get a much better understanding of the data volume.

Other ways to animate the data include varying a threshold, changing the colormap, changing the slice of data that is visualized, etc. Unfortunately it is difficult to show examples of "animated" data volumes adequately in a journal article. Thus, the discussion that follows will only address visualization of time-invariant 3D data using static images.

Scalar Data

The process of visualizing data sampled in three-space is usually called volume visualization or volume rendering. Westover [59] has delineated the major signal processing issues in scalar volume visualization. However, most visualization implementations -- including Westover's -- take shortcuts for simplicity or speed. Elvins [14] provides an excellent summary of the many scalar volume visualization algorithms, dividing them as most people do into two categories: surface fitting and direct volume rendering (DVR).

Surface Fitting

Surface fitting algorithms are basically the 3D extension of contours in two-space. One of the most frequently used techniques is called the Marching Cubes (MC) method [31]. It marches through a volume voxel by voxel, locating and connecting all points with a particular user-defined value. The "locations and connections" form the vertices and edges of triangles, which in turn form a tessellated surface.

An example of such an isosurface is shown in Figure 25; the isosurface is generated by connecting all the points with a certain density value. The anatomical feature is readily apparent. In its original incarnation, the MC algorithm had the problem of potentially producing holes in the surface due to the saddle point problem mentioned earlier in the discussion on contours. Fig. 26a shows part of the skull isosurface with some visible holes. Nielson and Hamann [39] provide one solution to the ambiguity problem; the improved result (a continuous surface) is shown in Fig. 26b. From a signal processing point of view, the ambiguous cube is undersampled. By considering adjacent data and making smoothness assumptions, we can guarantee a closed surface.


Figure 25. An isosurface extracted from a volume of 64 x 64 x 64 density values. The surface is created by connecting points with equal functional value, in this case intended to represent the hard structures within the cranium.


Figure 26. (a) An isosurface with holes in it. (b) An isosurface without holes in it.




Direct Volume Rendering

DVR algorithms [14, 59] map the data in the volume directly into an image; there are no intermediate geometric primitives as there are in surface fitting. There are two methods of DVR: feed-backward, in which the mapping is from the image plane into the data, and feed-forward, in which the mapping is from each voxel into the image plane. Sampling rays are projected through the data volume and are either accumulated or attenuated, depending on the approach. As it traverses the volume, a ray's properties, such as color and opacity, are modified based on the data classification to produce the image. A schematic diagram of the volume rendering process is shown in Fig. 27.


As Westover has stated [59], "signal processing forms the basis of volume rendering because volume rendering involves the reconstruction of the input data and a resampling to generate a discrete image." The ideal DVR process from a signal processing perspective would: "(1) Reconstruct the continuous volume function from the input samples by convolving the samples with a properly chosen reconstruction filter kernel; (2) Transform this continuous function into image space; (3) Shade the continuous function; (4) Filter the continuous shaded function with a low-pass filter to lower the signal's maximum frequency so that it is below one half the image resampling rate; (5) Sample the function at a rate corresponding to image resolution and calculate the visibility function to generate the image." [59]

The filtering and resampling steps are necessary since the shading process can spread the signal's spectrum considerably. The scientist typically uses three levels of DVR to analyze the data. Usually the first visualizations are simply point samples for a quick (aliased) preview, to determine the orientation of the data. The first refinement is to use inexpensive (linear) reconstruction kernels, which produce perceptually continuous images but work in near real time. Ultimately a non-interactive technique with high-quality shaders and a cubic reconstruction kernel is used to render a more physically and mathematically accurate image.
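
A sketch of the resampling and visibility step for one ray is given below, using the standard front-to-back "over" compositing of classified samples; the colors and opacities are hypothetical, and per-sample opacity is assumed to be already scaled for the sampling distance.

import numpy as np

def composite_ray(samples):
    """Front-to-back compositing of shaded, classified samples along one ray.
    Each sample is (rgb, alpha)."""
    color = np.zeros(3)
    alpha = 0.0
    for rgb, a in samples:
        color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:           # early ray termination: nothing behind can be seen
            break
    return color, alpha

# A hypothetical ray passing through low-, medium-, and high-density voxels:
ray = [((1.0, 0.0, 0.0), 0.1), ((0.0, 1.0, 0.0), 0.4), ((0.0, 0.0, 1.0), 0.8)]
print(composite_ray(ray))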

Figure 28 shows two DVRs from a volume of density values. In both parts, the opacity and saturation of each voxel are directly proportional to the magnitude of the density value in the voxel. In addition, the hue is determined from the HSV [15] color space by mapping the range of density values into angles from 0 to 360 degrees such that the smallest density values are red and the largest density values are blue-violet.

In Figure 28a, the density values are assumed to be randomly distributed within the voxel and mapped as point entities directly to the screen. If the density values are instead assumed to be at the center of the voxel and mapped as point entities directly to the screen, moire patterns occur.

In Figure 28b, Westover's splatting algorithm [59] is used to generate a more continuous image.


















Figure 28. Two DVRs from a volume of density values. In both parts, the opacity and saturation of each voxel are directly proportional to the magnitude of the density value in the voxel, and the hue is determined from the HSV [15] color space by mapping the range of density values into angles from 0 to 360 degrees such that the smallest density values are red and the largest density values are blue-violet. In (a), the density values are assumed to be randomly distributed within the voxel and mapped as point entities directly to the screen. In (b), Westover's splatting algorithm [59] is used to generate a more continuous image. The tradeoff is between time and image quality.




Frequency Domain Volume Rendering

Malzbender [33] and Totsuka and Levoy [55] have recently shown a way to use the Fourier Projection-Slice Theorem to generate images of volume data faster than with the spatial domain approaches. Exploiting this theorem allows an image to be obtained for any view direction and orientation by extracting a 2D plane of data from a 3D Fourier-transformed volume of data and inverse transforming just that slice of data. This technique, frequency domain volume rendering (or Fourier volume rendering), allows an image (projection) to be generated in O(N^2 log N) time for a volume of size N^3.

The technique is not without its limitations and shortcomings, however. The projections lack depth information much as x-ray images do (although [55] presents frequency domain lighting and depth cueing techniques) and the technique requires significant interpolation and reconstruction computations. However, there are some advantages. Progressive refinement to the final image (see Fig. 28) is more straightforward than with spatial domain volume rendering techniques and spatial filtering is simple to implement as a frequency domain process.
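
The underlying projection-slice relationship is easy to verify for an axis-aligned view, where the required slice falls exactly on the 3D spectrum's sample grid and no interpolation is needed; arbitrary view directions require resampling the spectrum, which is where the interpolation cost mentioned above arises. A sketch with NumPy's FFT routines:

import numpy as np

# Projection-slice check on a small volume: summing along one axis (a parallel
# projection) equals the inverse 2D FFT of the central slice of the 3D FFT.
rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32))

direct_projection = vol.sum(axis=0)                      # spatial-domain projection, O(N^3)

spectrum = np.fft.fftn(vol)                              # done once, up front
central_slice = spectrum[0, :, :]                        # slice through the origin, normal to axis 0
fourier_projection = np.fft.ifftn(central_slice).real    # per view: O(N^2 log N)

print(np.allclose(direct_projection, fourier_projection))   # True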

Data Sampled on an Irregular Grid

Although processing data on regular rectilinear grids is more straightforward, in many situations the data originate on an irregular grid. Slightly different algorithms are required for visualization. In general the techniques tend to be slower, but more accurate, than those for structured grids, since the basic geometric entity in the grid is a triangle in 2D and a tetrahedron in 3D. Often data on 3D irregular grids are interpolated onto slicing planes for visual analysis. One simple and often used volume rendering technique is a point cloud (see Fig. 28a), in which functional values are mapped into a colormap, positioned at their location in 3D space, and projected onto a view plane. Since the data are not regularly sampled, moire patterns do not occur. Motion (reprojection from a different direction) allows the user to see the 3D structure of the data. Few other DVR techniques exist for data sampled on an irregular grid [14]. Vector data sampled on irregular grids are easily visualized as 3D arrows or 3D streamlines.

Vector Data

Visualizing 3D vector data is an active research area [10, 16, 18, 24, 25, 26]. The major problem is the difficulty of showing global and local values; e.g., a vortex on one scale is a current or a wave on a more highly resolved sampling structure. Projecting magnitude and direction from 3D space into one uncluttered image is a difficult problem.

Some Common Techniques

Techniques that advect smoke, clouds, and flow volumes [32, 34, 35] do a good job of showing the flow at a global level, but not at a detailed level. Particle-based [22, 58] techniques show local flow properties well, but don't do a good job of showing the big picture. Helman and Hesselink [21] have extended the vector field topology techniques to 3D; unfortunately, the approach is not capable of producing as complete a view in 3D space as it is in 2D space.

Color Sphere

Hall [18] has developed a technique which visualizes 3D vectors using perceptually-based color spaces. As with the colorwheel technique [28, 29], the technique automatically emphasizes and classifies critical points. Foci and centers are rotated slices from the color space. The biggest advantages are that (1) color is invariant under projection, and (2) high sampling rates can be used. The principal drawbacks of the color-based representations are the lack of a standard mapping from vector space to color space, and user inexperience. The first drawback can be mitigated by showing the color space used; the second will only be overcome with use. Hall [18] proposes using a perceptually linear color space like the Munsell space, which is based on the opponent theory of color vision. Our experience is that this may be reaching too far and that limiting the paradigm to two-space may be wise. Note that the color space can be quantized to produce abrupt edges, which may help a scientist see global variations better. From a signal processing perspective, color is a way to represent a 3D vector quantity in a single sample (pixel) and thus provide a dense display of multivariate information.

Visualization of Time-Varying Data

We have already discussed visualization of time-varying data to some extent; we have just not talked about the problems in presenting it in a time-varying fashion. For example, flow data are inherently time-varying. Let's now discuss the pitfalls in presenting it in such a way.

To visualize time-varying data, sequences of images are often created using one or more of the aforementioned techniques, and then animated. For example, rendering a sequence of stills of instantaneous colorwheel images or appropriate LIC images allows the user to see flow dynamics.

In visualizing time-varying data, we must address temporal aliasing, which is caused by undersampling in the temporal dimension and can result in jerky motion or false impressions of time-varying phenomena. Temporal aliasing is often observed as detailed objects (e.g., wagon wheels) appearing to spin backwards and is the cause of the "vortex rotation miscalculations" discussed below. If a vortex (e.g., an ocean eddy or a cyclone) is rotating at one revolution per day, it must be sampled at least twice a day to be able to accurately reconstruct its revolution rate. Rotations of greater than a half revolution cannot be shown per frame without aliasing (confusion).

Translational motion is a different problem. As shown in [27], the eye will differentiate (i.e., see stroboscopic-like motion as opposed to continuous motion) if an object moves more than a few degrees in the eye's field of view over a few milliseconds. The trade-off, as with spatial filters, is between blurring, aliasing, and ringing. The solution is to blur the object to simulate a desired shutter speed. Usually a linear filter will suffice. The major problem with temporal anti-aliasing is not knowing the degree to which the viewer's eye will track the motion. If the viewer's eye follows the moving object (visual pursuit), the object will look blurry if the temporal filter is optimized for the perception of smooth motion [41]. If the objective is simply smooth motion (i.e., a better presentation), then linear interpolation in the temporal dimension is often sufficient. However, if the objective is a correct representation of something like fluid flow, then more physically-based interpolation schemes may need to be considered. For example, in dynamic physical oceanography, sampling on the order of every 12 hours may be necessary in the higher latitudes to resolve time-varying features like eddies. However, in the equatorial regions, the time scales are much longer, so sampling every 36 hours may be sufficient.

Applications

In the following sections, we consider various signal processing issues in visualizing dynamic physical oceanography data.

Feature Detection, Tracking, and Animation

Feature detection in data volumes can be viewed as analogous to signal detection at a radar receiver. It is the radar signal detector that recognizes useful signals embedded in noise in a time sequence. For a data volume, the feature detection algorithm must discern important features from noise. Noise can be conceptualized as anything that is not a feature.

A feature is a particular phenomenon that exists in the data. It is often the primary interest, e.g., eddies and fronts in oceanographic visualization. From the application point of view, a feature is a high-level object which has a number of important characteristics, which distinguish the feature. The properties of a feature vary over time and in significance. Feature extraction is performed by identifying the essential phenomena in the data according to the feature specification. A feature may emerge at any time, persist for a while, then dissipate. Alternatively, a feature may split or two features may merge.

A complete feature extraction algorithm for time-varying data consists of feature detection, feature tracking, and feature description. Feature detection is recognizing feature instances in each individual time step. Feature tracking is identifying the feature instances that belong to the same feature. Feature description is organizing the extracted features into compact and meaningful abstractions that are amenable to visualization and storage.

From the signal processing point of view, a feature detector/extractor can be considered a programmed filter. It processes the data volumes, locating, identifying, and reconstructing the features. In designing and applying feature detection algorithms, some problems arise similar to those in the traditional signal processing domain. These issues are discussed below, along with some feature detection applications.

Scalar Data Examples

Features are amorphous objects distinguished by a particular range of functional values (signal points) or by sharp gradients. Thus detecting features requires finding all the connected signal points that share the same feature property, e.g., high temperature, low pressure, etc., or locating the boundaries that enclose the features. For example, a feature detection algorithm was reported in [47, 50, 51] to detect and visualize vortices in computational fluid dynamics (CFD) data. The algorithm starts by finding the maxima in the vorticity field. Then "similar" points around the maxima are recognized to constitute the feature objects.

Directly locating a feature's boundary usually relies on an edge-detection technique. An edge in 1D is a single point with an abrupt change in signal, while in 2D and 3D it is a line and a surface, respectively. Detecting edges involves implementing an edge filter capable of identifying the true edges in the presence of noise. Tremendous effort has been made in optimizing edge filtering for various applications in image processing. However, for 3D data reliable edge detection is still a challenge. In [60, 61], an edge-based technique was developed to detect eddies in oceanographic data. In the algorithm, edge information is calculated by using a 3 x 3 x 3 edge operator [37]. The feature recognition is predicated on edges as well as the original functional values.

The identification criteria include the feature's size, shape, etc. An example of the results is shown in Fig. 29, which contains two visualizations of ocean temperature data from the Gulf of Mexico. The colors indicate temperatures in a spectrum colormap: red means higher temperature and blue lower temperature. The background is the ocean bottom. The main interest is the eddies' propagation characteristics (path, size, rotation speed, temperature, etc.).

The figure shows how feature extraction improves the visualization effectiveness. The left image was created by rendering the data volume with a standard surface fitting method [31] without feature extraction. Although two eddies are perceivable, they are obscured by noise. The image on the right was produced with the help of feature extraction. The visualization was tuned by the characteristics of the features to be extracted (eccentricity near one, smooth shape, minimum/maximum size, etc.), resulting in better accuracy and little noise interference. The extracted features clearly do not contain the same functional values. The detection algorithm should allow such variations in recognizing features. This is also true along the time axis.


Figure 29. Ocean features (eddies) in the Gulf of Mexico extracted for visualization: (a) a surface fitting algorithm is used to generate surfaces on which the functional (scalar) value is constant. (b) modeling is performed to isolate physical phenomena, e.g., eccentricity must be arbitrarily near unity, a minimum and maximum size is imposed, there must be a surface at the sea surface, etc.


The four curves in Fig. 30 show the functional values of the larger extracted eddy in Fig. 29, at four different depths over a 150-day interval sampled every 15 days. The large variance in the boundary values over the 150-day interval requires an adaptive algorithm. A series of small multiples is shown in Fig. 31. The order of the semi-monthly images is left to right, top to bottom. The feature merging, evolving, and splitting are easily observed in the series of small multiples.




Figure 30. The functional values (C) of the larger extracted eddy in Fig. 29 at four depths over a 150-day interval.




Figure 31. A small multiple of extracted eddies which shows the variance in their characteristics over space and time. The sequence progresses from left to right and top to bottom (lexicographic order). The timestep between images is 15 days. The feature merging, evolving, and splitting are easily observed.


Flow Data Examples

Features in flow fields exhibit particular flow patterns, e.g., a vortex in CFD and an eddy in ocean circulations. One way to locate them involves the derivation of other mathematical quantities associated with the flow patterns [3].

For instance, a vortex can be found by analyzing the vorticity, which points in the direction of the vortex core and has large magnitudes along the core. The performance depends on the accuracy of the calculated vorticity in this case. The performance may be improved by correlating the vorticity to other co-existing physical fields, i.e., the pressure along the vortex core should be a local minimum.

An example of detecting features directly from the flow fields is found in [62]. The application is to extract ocean eddies from the output of an ocean circulation model, where the auxiliary flow quantities are not readily available. The algorithm consists of three steps. First, a critical-point method [49] is used to identify all flow patterns that could indicate a possible feature. In the second step, the properties of the feature candidates are estimated by "molding" a geometric deformable model about each feature candidate. The vector field is used to guide the model deformation, which starts at the critical point and grows out until the deviation between the vector field and the tangents to the closed polygon exceeds a threshold. The feature instances are recognized using the specified feature properties and the detection criteria. Finally, a tracking process is used to track the detected feature instances.

The tracking algorithm predicts the feature evolution over time and uses a weighted probability function to match the feature instances. Figure 32 shows an example of a feature (eddy) instance that is extracted from an ocean flow volume using this algorithm. In Fig. 32a, three extracted eddies are visualized in the top layer by using simple geometric models. In Fig. 32b the underlying flow field in the top layer is shown.


Figure 32. (a) The top of three extracted eddies in the Gulf of Mexico with the ocean bottom as context. (b) The corresponding flow field in the top layer.



Tracking and Animation

Usually, features are temporal phenomena. A completely identified feature thus consists of all its instances tracked over time. In tracking the feature instances, the tracking algorithm depends on the coherence of the feature's most important properties. Since these properties vary continuously, making the feature dissimilar to itself from time to time, successful tracking depends on finding the best match between feature instances and on there being a sufficient probability "gap" to determine the true match. This expectation, in turn, largely relies on there being only small changes in the feature's properties between sampling instances. In most cases, the sampling rate for a dataset is constant. If it is low with respect to the feature's dynamic rate, tracking errors will occur. A simple example is given in Fig. 33 to show the relationship in a tracking process.

Note that there is a one-timestep lag between the translational speed and the tracking probability curves, since the tracking probability measures the similarity of the current instance to the previous one. The translational speed is estimated at each time instance and is in centimeters per second. The tracking probability is a coherence measure reflecting the similarity of each feature instance to its predecessor. In this example, the tracking probability is based on the predicted location, curl similarity, and size similarity. Because the temporal sampling rate is constant, a larger translational speed implies a lower spatial sampling rate. It is clear from the figure that when the eddy speeds up, causing a lower sampling rate, the tracking probability drops.
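
A hedged sketch of such a weighted match score is given below; the weights, the location scale, and the exponential and ratio forms of the similarity terms are illustrative assumptions, not the function used in [62].

import math

def tracking_probability(predicted, candidate, w_loc=0.5, w_curl=0.25, w_size=0.25,
                         loc_scale=50.0):
    """Weighted match score between a predicted feature instance and a candidate in
    the next timestep. Each term is a similarity in [0, 1]; the weights and the
    location scale (in km) are illustrative."""
    dx = candidate["x"] - predicted["x"]
    dy = candidate["y"] - predicted["y"]
    loc_sim = math.exp(-math.hypot(dx, dy) / loc_scale)
    curl_sim = 1.0 - abs(candidate["curl"] - predicted["curl"]) / (
        abs(candidate["curl"]) + abs(predicted["curl"]) + 1e-12)
    size_sim = min(candidate["size"], predicted["size"]) / max(candidate["size"],
                                                               predicted["size"])
    return w_loc * loc_sim + w_curl * curl_sim + w_size * size_sim

predicted = {"x": 100.0, "y": 40.0, "curl": -2.0e-5, "size": 80.0}
slow_eddy = {"x": 105.0, "y": 42.0, "curl": -1.8e-5, "size": 85.0}
fast_eddy = {"x": 160.0, "y": 70.0, "curl": -1.8e-5, "size": 85.0}
print(tracking_probability(predicted, slow_eddy))   # high: small displacement between samples
print(tracking_probability(predicted, fast_eddy))   # lower: a faster eddy degrades the match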



Figure 34. Probability distributions of two important dynamic properties of features: (a) traveling speed in meters per second, and (b) rotation frequency in rotations per sampling interval. The large differences in the peak values and the significant differences in the shape of the distribution curves necessitate a robust, dynamic feature detection and extraction algorithm.


Even in the same dataset, features in different areas may have different characteristics. Figures 34a and 34b show statistical comparisons of two properties of detected features (eddies) [62] in the same dataset. Figure 34a includes two distributions of traveling speed. They show that the features in region B move faster than those in region A, i.e., at higher speeds the dotted line, which represents Region B, is above and extends beyond the solid line, which represents Region A. The curves in Fig. 34b are the rotation frequency distributions. Again, the features in Region B exhibit larger dynamic variations, i.e., fewer small values and more large values.

When animating the tracked features, the feature's dynamic properties can be mapped onto the feature realization, providing more information. Figure 35 is a snapshot of an animated ocean eddy at the top layer of the data [62]. The red line is the trajectory of the eddy. An HSV color scheme is used to visualize eddy properties, with the hue varying from yellow to red to show the rotational speed. To show the rotation rate, a random pattern is generated and used to modulate the brightness, resulting in "spokes" radiating from the eddy center. During animation, the pattern is rotated appropriately between timesteps. Only hue (H) and brightness (V) are varied due to the limitations of the human visual system. "Contour" patterns are created to show the shape of the eddy by introducing a small amount of brightness deviation at every two "contour" lines, i.e., circles distorted to conform to the local flow field.

Figure 35. One frame from an animation of a moving eddy. The red line is the trajectory of the eddy. The radial pattern is generated so that the rotation between timesteps can be seen. The concentric "contours" are used to show the flow pattern of the eddy. The background is the ocean bathymetry.



Multiresolution Visualization and Analysis

Since computer technology, and in particular I/O, is usually the limiting factor in scientific visualization, reducing the storage space and the access time is important in the exploratory visualization process. One approach is multiresolution visualization. The principle is to transmit the least amount of data necessary to generate a usable approximation of the original data. If the user requires further refinement, more data are fetched into memory and a higher resolution dataset is reconstructed. By exploiting certain wavelet bases [12], a fast progressive algorithm can be developed and functional values can be reconstructed in the continuous space with a simple polynomial. Although the lower resolution stages are only approximations, the data representation can be reconstructed perfectly with the addition of the final "detail" stage. Lossless compression is possible, even when the data are floating-point numbers. By changing the representation from 32-bit values to wavelet coefficients, a more succinct, but lossless, representation is possible. See Fig. 36.


Different wavelets are optimal for different applications [43]. For visualization, Muraki [38] applied a truncated version of the Battle-Lemarié wavelets to volume data and proposed a superposition algorithm for reconstructing the functional values in continuous space. The major deficit of Muraki's technique is the time-consuming reconstruction procedure, since it emphasizes smoothness over speed. Because an infinitely supported function, such as the Battle-Lemarié wavelets, is used as the superposition basis, a complex approximation algorithm has to be employed to reconstruct the functional values in continuous space. This greatly limits the technique's value in interactive applications.

A wavelet transform (WT) based on biorthogonal splines can overcome these problems. The lengths of all the filters can be short. Since all the filter coefficients are dyadic rationals, all multiplication and division operations can be simplified to additions and subtractions for floating-point functional values, or to shifts for integer values. This makes the transform fast and makes lossless coding possible. Symmetry is easily achieved, resulting in better handling of the data boundary [1, 13] and half the multiplicative operations.
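One well-known member of the biorthogonal spline family, the 5/3 (LeGall) filter pair, illustrates how dyadic coefficients reduce to additions and shifts. The sketch below applies the lifting form of that transform to integer data; it is offered as an illustration of the lossless property, not as the exact filter pair or implementation used by the authors.

def forward_53(x):
    # One stage of the 5/3 biorthogonal spline WT on an even-length list of
    # integers, written in lifting form: only additions and arithmetic shifts.
    half = len(x) // 2
    even, odd = x[0::2], x[1::2]
    d = []   # detail (high-pass) coefficients
    for i in range(half):
        right = even[i + 1] if i + 1 < half else even[half - 1]   # mirror edge
        d.append(odd[i] - ((even[i] + right) >> 1))
    s = []   # approximation (low-pass) coefficients
    for i in range(half):
        left = d[i - 1] if i > 0 else d[0]                        # mirror edge
        s.append(even[i] + ((left + d[i] + 2) >> 2))
    return s, d

def inverse_53(s, d):
    # Exactly undoes forward_53, so the scheme is lossless for integer data.
    half = len(s)
    even = []
    for i in range(half):
        left = d[i - 1] if i > 0 else d[0]
        even.append(s[i] - ((left + d[i] + 2) >> 2))
    x = []
    for i in range(half):
        right = even[i + 1] if i + 1 < half else even[half - 1]
        x.append(even[i])
        x.append(d[i] + ((even[i] + right) >> 1))
    return x

data = [12, 15, 14, 9, 7, 8, 11, 13]
s, d = forward_53(data)
assert inverse_53(s, d) == data   # perfect reconstruction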

Biorthogonal Wavelet Transform

One stage of a biorthogonal WT decomposition and reconstruction scheme [13] is illustrated in Fig. 37. The user picks two filter pairs, (h, g) and (h~, g~) [13]. The four filters are related to the data through the decomposition and reconstruction equations

c^{j}_{k} = \sum_{n} \tilde{h}_{n-2k}\, c^{j-1}_{n}, \qquad d^{j}_{k} = \sum_{n} \tilde{g}_{n-2k}\, c^{j-1}_{n}, \qquad c^{j-1}_{n} = \sum_{k} \left( h_{n-2k}\, c^{j}_{k} + g_{n-2k}\, d^{j}_{k} \right) \qquad (2)

where c^{j-1} is the data input at a given resolution level j-1, c^{j} is the approximation at the next coarser level, and d^{j} are the detail (wavelet) coefficients. In the decomposition stage, the data are convolved with the filter pair h~ and g~. Often, h~ is a low-pass filter while g~ is a high-pass filter; as a result, c^{j} can be viewed as a coarser approximation of c^{j-1}. Because the filtered data are downsampled, the total number of samples (approximation plus detail) at level j is the same as that at level j-1. This procedure is usually repeated several times on c^{j}, as indicated in Fig. 36, to give a multiresolution representation. With biorthogonal wavelets, h~ and g~ need not equal h and g, as they would with orthonormal wavelets. However, after reconstruction with h and g, the final result is identical to the original c^{j-1}.

If \hat{h}(\omega) and \hat{\tilde{h}}(\omega) are the Fourier transforms of h and h~, then, according to [13], a sufficient condition for the filters to achieve perfect reconstruction is:

\hat{h}(\omega)\,\overline{\hat{\tilde{h}}(\omega)} + \hat{h}(\omega+\pi)\,\overline{\hat{\tilde{h}}(\omega+\pi)} = 2 \qquad (3)

where \hat{g}(\omega) = e^{-i\omega}\,\overline{\hat{\tilde{h}}(\omega+\pi)} and \hat{\tilde{g}}(\omega) = e^{-i\omega}\,\overline{\hat{h}(\omega+\pi)} (see Table 2). If the associated scaling function is a B-spline function, h~ and g~ are called spline filters [52].

Table 2 lists examples of bases from this family. H(z) and G(z) are the z-transforms of h and g. Note that the denominators of the filter coefficients are all powers of two, which makes them perfect candidates for visualization applications. Bases can be chosen from this family depending on implementation requirements, such as speed and smoothness. If a set of "longer" bases is used, the frequency separation property of h and g will be better, but at the expense of more computational complexity and a lower lossless compression ratio. The (lossless) compression ratio is smaller because the information cannot be as well localized. For efficient exploratory visualization, the speed of the inverse WT tends to be of more interest than the smoothness of the reconstruction. Figure 38 shows the scaling function and the wavelet function for the wavelet bases used in the remainder of this section.








Reconstruction

If these wavelet bases are used, the functional value at x can be reconstructed using the following formula:

f(x) = \sum_{n} c^{J}_{n}\, \phi_{J,n}(x) + \sum_{j=1}^{J} \sum_{n} d^{j}_{n}\, \psi_{j,n}(x) \qquad (4)

where J is the number of decomposition levels and \phi and \psi are the synthesis scaling and wavelet functions.

When \phi is a set of compactly supported, piecewise-polynomial B-spline functions, f(x) can be written in a closed polynomial form of the same order. Assume that \phi and \psi are of order 1, as shown in Fig. 38. Then, the process of reconstructing the function values in continuous space from the WT coefficients is illustrated in Fig. 39. In each interval [i, i+1), the reconstructed function is linear. Thus, Eq. 4 can be simplified to:

f(x) = (i + 1 - x)\, f_{i} + (x - i)\, f_{i+1}, \qquad x \in [i, i+1) \qquad (5)

where f_{i} and f_{i+1} are the reconstructed values at the two integer grid points surrounding x.


The exact functional value for any x can thus be computed through linear interpolation of the functional values at the two discrete neighbors that surround x; the resulting f(x) is continuous. When the order of \phi and \psi is larger than 1, Eq. 4 can still be written in a closed polynomial form of that order, and the reconstructed function is then correspondingly smoother. If the tensor product of the one-dimensional bases is taken as the basis for the multidimensional WT, the same conclusions can be derived in that space.
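A direct reading of Eq. 5 in code (a minimal sketch, where samples holds the functional values already reconstructed at the integer grid points):

import numpy as np

def f_at(x, samples):
    # Evaluate the reconstructed function at an arbitrary coordinate x by
    # linearly interpolating between its two integer-grid neighbors (Eq. 5).
    i = int(np.floor(x))
    i = max(0, min(i, len(samples) - 2))   # clamp so both neighbors exist
    t = x - i
    return (1.0 - t) * samples[i] + t * samples[i + 1]

samples = np.array([2.0, 3.5, 3.0, 4.2])
print(f_at(1.25, samples))   # 0.75 * 3.5 + 0.25 * 3.0 = 3.375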




Example

Consider a scheme using the wavelet bases shown in Fig. 38, applied to time-varying ocean model data [23, 53, 54] sampled on a 337 x 468 x 6 grid at 120 timesteps per year. We visualize three floating-point variables: layer thickness (a scalar) and the current data (a 2D vector). Thus, the exploratory scientific visualization task involves rendering data from eighteen 75-MB data files simultaneously. From a compression perspective, each of the six time-varying layers is considered separately.
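The file count and size follow directly from these numbers (assuming 4-byte floats and one file per variable per layer):

3 \text{ variables} \times 6 \text{ layers} = 18 \text{ files}, \qquad 337 \times 468 \times 120 \times 4 \text{ bytes} \approx 75.7 \text{ MB per file}.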

By masking out the time-invariant contextual information (the land mass), we need only reconstruct 66% of the area at each timestep (Fig. 40a). However, the approximated ocean area must cover the entire original ocean area (Fig. 40b). For grid points that lie in the approximated ocean area but not in the original area, interpolation and extrapolation techniques are used to yield functional values. This technique introduces discontinuities at the data boundary.
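One common way to supply values at such grid points is a nearest-ocean-neighbor fill, sketched below with SciPy. This is an assumed illustration of the interpolation/extrapolation step, not necessarily the scheme used in [53, 54].

import numpy as np
from scipy import ndimage

def extend_over_land(field, ocean_mask):
    # For every land cell, copy the value of the nearest ocean cell so the
    # wavelet transform sees a field defined over the whole grid.
    # distance_transform_edt returns, for each cell, the indices of the
    # nearest zero cell of its input; here zeros mark the ocean.
    _, nearest = ndimage.distance_transform_edt(~ocean_mask, return_indices=True)
    return field[tuple(nearest)]

ocean = np.array([[True, True, False],
                  [True, False, False]])
thickness = np.array([[10.0, 12.0, 0.0],
                      [11.0, 0.0, 0.0]])
print(extend_over_land(thickness, ocean))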



Applying a WT twice in each spatial dimension and once in the temporal dimension, as shown in Fig. 41, allows us to obtain a 32:1 compression ratio if only the lowest-order coefficients are used to visualize the data. Since only 66% of the data change per timestep, the actual compression ratio is approximately 50:1.
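A quick check of these ratios (assuming each decomposition level halves the sample count along its dimension, and that only the ocean fraction of the grid needs to be stored):

\frac{1}{4} \cdot \frac{1}{4} \cdot \frac{1}{2} = \frac{1}{32} \;\Rightarrow\; 32{:}1, \qquad \frac{32}{0.66} \approx 48 \;\Rightarrow\; \text{roughly } 50{:}1.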



Figure 41: Wavelet transform coefficients of 3-D ocean model data.



In Fig. 42, we visualize layer thickness, a scalar field, using both the original functional values and only the lowest-order WT coefficients. The rendering speed when using just the lowest-order coefficients is about ten frames per second on a Silicon Graphics Indigo2 workstation, which is sufficient for most applications. The reconstructed image is of high quality and meets the requirements for further scientific visualization processing. For the 2D vector field, the two components are encoded independently, and streamlines are generated both for the original field and for the field reconstructed using only the lowest-order WT coefficients. The results are shown in Fig. 43.


Figure 42. Layer thickness for the NE Pacific Ocean: (a) original thickness data, (b) reconstructed data (compression ratio of 50:1).



Figure 43. Velocity data for the NE Pacific Ocean: (a) original data (b) reconstructed data (compression ratio of 50:1).


In most physical phenomena, there is an inherent hierarchy of detail. Applying a topology extraction algorithm [20] to this vector field, the global and stable topological structures remain relatively unchanged, even at a 16:1 compression ratio (Fig. 44). By using a refinement control strategy, such as the one proposed by Blandford [4], only a small amount of additional data is required for a better reconstruction. Using more coefficients, we can extract more and more of the local and unstable features. These results are due to the low-pass effects of the WT.
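Vector-field topology extraction in the spirit of [20, 21] starts from the critical points of the field. A minimal sketch of that first step is shown below; the cell test and the Jacobian-based classification are standard ingredients, but the specific implementation applied in Fig. 44 is not reproduced here.

import numpy as np

def candidate_cells(u, v):
    # Flag grid cells in which both velocity components change sign; such
    # cells may contain a critical point (a zero of the vector field).
    cells = []
    ny, nx = u.shape
    for j in range(ny - 1):
        for i in range(nx - 1):
            cu, cv = u[j:j + 2, i:i + 2], v[j:j + 2, i:i + 2]
            if cu.min() <= 0.0 <= cu.max() and cv.min() <= 0.0 <= cv.max():
                cells.append((j, i))
    return cells

def classify(u, v, j, i):
    # Classify an interior critical point by the eigenvalues of the local
    # Jacobian, estimated with central differences (unit grid spacing).
    J = np.array([[(u[j, i + 1] - u[j, i - 1]) / 2.0, (u[j + 1, i] - u[j - 1, i]) / 2.0],
                  [(v[j, i + 1] - v[j, i - 1]) / 2.0, (v[j + 1, i] - v[j - 1, i]) / 2.0]])
    eig = np.linalg.eigvals(J)
    if np.iscomplexobj(eig):
        return "focus or center"            # rotating flow around the point
    return "saddle" if eig[0] * eig[1] < 0 else "node"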


Figure 44. Vector field topology extraction on reconstructed data: (a) using original data (b) using 1/4 of the WT coefficients (c) using 1/16 of the WT coefficients.


Conclusions

Signal processing plays an important role in scientific visualization, i.e., the process of mapping data into images. The developers of algorithms and systems and the users of those algorithms and systems must exercise care to avoid seeing things in the data that are not there. Although the scientific visualization process includes computer graphics, computational geometry, and cognitive science, it is signal processing, image processing, and computer vision that are at the heart of the whole process. As Westover [59] has so well noted, adequate attention must be paid to the signal processing aspects of the process to obtain correct images. Computer graphics and geometric entities are simply an intermediate step, often used because of familiarity with a particular representation or for ease of use.

As image display technologies improve in their image fidelity and as computer graphics hardware sub-systems increase in their performance, realistic, realtime, interactive scientific visualization will be possible. However, the imagery will provide incorrect and misleading information if inferior sampling and reconstruction algorithms are used.

Although most scientific visualization algorithms involve signal processing to some extent, two particularly important applications of signal processing in scientific visualization are (1) automatic feature detection, analysis, and tracking, and (2) multiresolution analysis. Automatic feature detection lets the computer do the work, relieving the scientist of repetitive tedium, enabling him to exploit the knowledge of others, and allowing him the time to do what humans do best: explore. Multiresolution analysis allows the scientist to exploit the multiscale processing humans perform routinely. As we begin to investigate an environment, we seek first to quickly get the big picture and then to zoom in on the details. Generally, it is better for the big picture to obscure information than to present misleading information.

In summary, while computer technology remains the largest constraint in exploratory scientific visualization, the signal processing aspects must be addressed to improve the accuracy of the visualization schemes.

Acknowledgements

The authors wish to thank Jonathan Barlow, Cass Everitt, Kelly P. Gaither, Andreas Johannsen, and Hai Tao for providing some of the figures. Both authors have profited considerably from many extensive discussions with Bernd Hamann, especially those related to geometric modeling and vector visualization issues. The suggestions and corrections of the five anonymous reviewers are graciously acknowledged.

This work was supported in part by the National Science Foundation, the Strategic Environmental Research and Development Program, the Advanced Research Projects Agency, the Office of Naval Research, the Naval Oceanographic Office, and the U.S. Army Corps of Engineers.

Robert J. Moorhead is at the NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, Mississippi State, MS. Zhifan Zhu is at the Mississippi State University Center for Air Sea Technology, Stennis Space Center, MS.

References (in alphabetical order)

1. M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image Coding Using Wavelet Transform," IEEE Trans. on Image Processing, Vol. 1, No. 2, April 1992, pp. 205-220.

2. G. Bancroft, T. Plessel, F. Merritt, P. Walatka, and V. Watson, "Scientific Visualization in Computational Aerodynamics at NASA Ames Research Center," IEEE Computer, Vol. 22, No. 8, August 1989, pp. 89-95.

3. D. C. Banks and B. A. Singer, "Vortex Tubes in Turbulent Flows," IEEE Visualization '94 Proceedings, Oct. 1994, Washington, D.C., pp. 132-139.

4. R. P. Blandford, "Wavelet Encoding and Variable Resolution Progressive Transmission," NASA Space & Earth Science Data Compression Workshop Proceedings, April 1993, pp. 25-34.

5. J. Blinn, "What We Need Around Here Is More Aliasing," IEEE Computer Graphics & Applications, Vol. 9, No. 1, Jan. 1989, pp. 75-79.

6. J. Blinn, "Return of the Jaggy," IEEE Computer Graphics & Applications, Vol. 9, No. 2, Apr. 1989, pp. 82-89.

7. J. Blinn, "Compositing, Part 1: Theory," IEEE Computer Graphics & Applications, Vol. 14, No. 5, Sept. 1994, pp. 83-87.

8. J. Blinn, "Compositing, Part 2: Practice," IEEE Computer Graphics & Applications, Vol. 14, No. 6, Nov. 1994, pp. 78-82.

9. K.W. Brodlie et al (Editors), Scientific Visualization Techniques and Applications, Springer-Verlag, 1992.

10. B. Cabral and L. Leedom, "Imaging Vector Fields Using Line Integral Convolution," ACM SIGGRAPH 93, July 1993, pp. 263-270.

11. K. Challapali, X. Lebegue, J. S. Lim, W. H. Paik, R. Saint Girons, E. Petajan, V. Sathe, P. A. Snopko, and J. Zdepski, "The Grand Alliance System for US HDTV, " IEEE Proc., Vol. 83, No. 2, Feb. 1995, pp. 158-174.

12. M. A. Cody, "The Fast Wavelet Transform," Dr. Dobb's Journal, Vol. 17, No. 4, April 1992.

13. I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, no. 61, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1992.

14. T. Elvins, "A Survey of Algorithms for Volume Visualization," Computer Graphics, Vol. 26, No. 3, August 1992, pp. 194-201.

15. J. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley, 1990.

16. L. Forssell, "Visualizing Flow Over Curvilinear Grid Surfaces Using Line Integral Convolution," IEEE Visualization '94 Proceedings, Oct. 1994, Washington, D.C., pp. 240-247.

17. R. Franke, H. Hagen, and G.M. Nielson, "Least squares surface approximation to scattered data using multiquadratic functions," Advances in Computational Mathematics, 2(1), 1994, pp. 81-99.

18. P. Hall, "Volume rendering for vector fields," The Visual Computer, Vol. 10, No. 2, 1993, pp. 69-78.

19. B. Hamann, D. Wu, and R. J. Moorhead II, "On particle path generation based on quadrilinear interpolation and Bernstein-Bézier polynomials," IEEE Transactions on Visualization and Computer Graphics, Vol. 1, No. 3, Sept. 1995.

20. J. L. Helman and L. Hesselink, "Representation and Display of Vector Field Topology in Fluid Flow Data Sets," IEEE Computer, Vol. 22, No. 8, Aug. 1989, pp. 27-36.

21. J. L. Helman and L. Hesselink, "Visualizing Vector Field Topology in Fluid Flows," IEEE Computer Graphics & Applications, Vol. 11, No. 3, May 1991, pp. 36-46.

22. A. Hin and F. Post, "Visualization of Turbulent Flow with Particles," IEEE Visualization '93 Proceedings, San Jose, CA, Oct. 1993, pp.46-52.

23. H. E. Hurlburt, A. J. Wallcraft, Z. Sirkes and E. J. Metzger, "Modeling of the Global and Pacific Oceans: On The Path to Eddy-Resolving Ocean Prediction," Oceanography, Vol. 5, No. 1, 1992, pp. 9-18.

24. IEEE Computer, Special Issue on Visualization, Vol. 27, No. 7, July 1994.

25. IEEE Computer Graphics & Applications, Special Issue on Visualization, Vol. 14, No. 5, Sept. 1994.

26. IEEE Visualization '94 Proceedings, Washington, D.C., Oct. 1994.

27. A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, 1989.

28. A. Johannsen and R. Moorhead, "AGP: Ocean Model Flow Visualization," IEEE Computer Graphics & Applications, Vol. 15, No. 4, July 1995.

29. A. Johannsen and R. Moorhead, "Flow Visualization of Basin-Scale Ocean Data," IEEE Visualization '94, Washington, D.C., Oct. 1994, pp. 355-358.

30. H. Levkowitz, "Linearized Optimal Color Scale compared with Linearized Gray Scale for Medical Image Data," IEEE Computer Graphics & Applications, Vol. 12, No. 1, Jan. 1992, pp. 72-80.

31. W. E. Lorensen and H. E. Cline, "Marching Cubes: A High Resolution 3D Surface Construction Algorithm", ACM SIGGRAPH 87, July 1987, pp. 163-169.

32. K.-L. Ma and P. Smith, "Virtual Smoke: An Interactive 3D Flow Visualization Technique," IEEE Visualization '92 Proceedings, Boston, MA, Oct. 1992, pp. 46-53.

33. T. Malzbender, "Fourier Volume Rendering," ACM Trans. on Graphics, Vol. 12, No. 3, July 1993, pp. 233-250.

34. N. Max, B. Becker, and R. Crawfis, "Flow Volumes for Interactive Vector Field Visualization," IEEE Visualization '93 Proceedings, San Jose, CA, Oct. 1993, pp. 19-24.

35. N. Max, R. Crawfis, and D. Williams, "Visualizing Wind Velocities by Advecting Cloud Textures," IEEE Visualization '92 Proceedings, Boston, MA, Oct. 1992, pp. 179-184.

36. D. P. Mitchell and A. N. Netravali, "Reconstruction Filters in Computer Graphics," ACM SIGGRAPH 88, August 1988, pp.221-228.

37. R. J. Moorhead and Z. Zhu, "Feature Extraction for Oceanographic Data Using a 3D Edge Operator," IEEE Visualization '93 Proceedings, San Jose, CA, Oct. 1993, pp.402-405.

38. S. Muraki, "Volume Data and Wavelet Transforms," IEEE Computer Graphics & Applications, Vol. 13, No. 4, July 1993, pp. 50-56.

39. G. M. Nielson and B. Hamann, "The Asymptotic Decider: Resolving the Ambiguity in Marching Cubes," IEEE Visualization '91 Proc., San Diego, CA, Oct. 1991, pp. 83-91.

40. F. H. Post and T. van Walsum, "Fluid Flow Visualization," Focus on Scientific Visualization, H. Hagen, H. Müller, and G. M. Nielson (eds.), Springer-Verlag, Berlin, 1993, pp. 1-40.

41. M. Potmesil and I. Chakravarty, "Modeling Motion Blur in Computer-Generated Images," ACM SIGGRAPH 83, July 1983, pp. 389-399.

42. A. Puri, R. Aravind, and B. Haskell, "Adaptive frame/field motion compensated video coding," Signal Processing: Image Communications, Vol. 5, No. 1-2, Feb. 1993, pp. 39-58.

43. O. Rioul and M. Vetterli, "Wavelets and Signal Processing," IEEE SP Magazine, Oct. 1991, pp 14-38.

44. P. K. Robertson and J. F. O'Callaghan "The Generation of Colour Sequences for Univariate and Bivariate Mapping," IEEE Computer Graphics & Applications, Vol. 6, No. 1, Feb. 1986, pp. 24-32.

45. P. Robertson, "Visualizing Colour Spaces: A User Interface for the Effective Use of Perceptual Colour in Data Displays," IEEE Computer Graphics & Applications, Vol. 8, No. 5, Sept. 1988, pp. 50-64.

46. M. A. Sabin, "A Survey of Contouring Methods," Computer Graphics Forum, Vol. 5, 1986, pp. 325-339.

47. R. Samtaney, D. Silver, N. Zabusky, and J. Cao, "Visualizing Features and Tracking Their Evolution," IEEE Computer, Vol.27, No.7, July 1994, pp. 20-27.

48. L. L. Schumaker, "Computing optimal triangulations using simulated annealing," Computer Aided Geometric Design, Vol. 10, 1993, pp. 329-345.

49. C.-F. Shu and R. C. Jain, "Vector Field Analysis for Oriented Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 9, Sept. 1994, pp. 946-950.

50. D. Silver and N. J. Zabusky, "3D Visualization and Quantification of Evolving Amorphous Objects," SPIE Vol. 1459, San Jose, CA, February 1991, pp. 116-126.

51. D. Silver and N. J. Zabusky, "Quantifying Visualizations for Reduced Modeling in Nonlinear Science: Extracting Structures from Data Sets," The Journal of Visual Communications and Image Representation, Vol. 4, No. 1, 1993, pp. 46-61.

52. P. Sriram and M. W. Marcellin, "Image Coding Using Wavelet Transforms and Entropy-Constrained Trellis-Coded Quantization," IEEE Trans. on Image Processing, Vol. 4, No. 6, June 1995, pp. 725-733.

53. H. Tao and R. Moorhead, "Progressive Transmission of Scientific Data Using Biorthogonal Wavelet Transform," IEEE Visualization '94, Washington, D.C., Oct. 1994, pp. 93-99.

54. H. Tao and R. Moorhead, "Lossless Progressive Transmission of Scientific Data Using Biorthogonal Wavelet Transform," IEEE International Conference on Image Processing, Austin, TX, Nov. 1994, Vol III, pp. 373-377.

55. T. Totsuka and M. Levoy, "Frequency Domain Volume Rendering," ACM SIGGRAPH 93, July 1993, pp. 271-278.

56. K.-H. Tzou, "Progressive Image Transmission: A Review and Comparison of Techniques," Optical Engineering, Vol. 26, No. 7, July 1987, pp. 581-589.

57. C. Upson, R. Wolff, R. Weinberg, and D. Kerlick, "Two and Three Dimensional Visualization Workshop," Shortcourse #13, ACM SIGGRAPH 89, August 1989.

58. J. van Wijk, "Rendering Surface-Particles," IEEE Visualization '92 Proceedings, Boston, MA, Oct. 1992, pp. 54-61.

59. L. Westover, "SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm," Ph.D. Dissertation, (also Dept. of Computer Science Tech. Report TR91-029), University of North Carolina at Chapel Hill, 1991.

60. Z. Zhu, R. J. Moorhead II, H. Anand and L.R. Raju, "Feature Extraction and Tracking in Oceanographic Visualization," SPIE Vol. 2178, San Jose, CA, Feb. 1994, pp.31-39.

61. Z. Zhu and R. J. Moorhead II, "Exploring Feature Detection Techniques for Time-Varying Volumetric Data," IEEE Workshop on Visualization and Machine Vision Proceedings, Seattle, WA, June 1994, pp.45-53.

62. Z. Zhu and R. J. Moorhead II, "Extracting and Visualizing Ocean Eddies in Time-Varying Flow Fields," 7th International Symposium on Flow Visualization, Seattle, WA, Sept. 1995.