Scientific Visualization is the mapping of scientific data and information to imagery to gain understanding or insight. The signal processing aspects of the mapping process are often underestimated. Issues such as sampling rate, reconstruction filters, the human visual system, etc. have significant effects on data analysis and presentation. Data analysis and presentation involve transforming numeric representations into imagery and viewing that imagery via either a softcopy device (e.g., a cathode ray tube or CRT) or a hardcopy device (e.g., printer output, 35mm film, videotape, etc.). In performing these transformations and presentations, a number of signal processing issues should be considered. Aliasing and filtering [5, 6, 7, 36] are important issues since most data that are visualized are sampled data, discrete in space, time, and value. Color theory and models [44, 45] are important since color is often used to represent functional value. Due to the often massive quantity of information that needs to be visualized, automatic feature extraction [37, 50, 51, 60, 61, 62], data compression [4, 56], and multiresolution visualization [38, 53, 54] are some of the most active areas of signal-processing-oriented scientific visualization research today, especially in the areas of multivariate and flow visualization [24, 25, 26]. In addition, recent publications [33, 55] have shown a way to use the Fourier Projection-Slice Theorem to generate images of volume data faster.
The visualization process usually involves creating geometric objects (e.g., points, lines, and polygons) from a set of discrete values at a finite number of locations in 3D space. These geometric objects are then rendered into one or more images. In some visualization techniques, the data are directly mapped into imagery, bypassing the intermediate step of mapping the data into one or more geometric representations.
Mapping Numbers to Imagery
Shape
To explore a dataset, a scientist may map the data in a number of different ways. One mapping is to have the functional values (temperature, pressure, humidity, salinity, density, velocity, stress, etc.) determine the shape of an object. In Fig. 1 is an image depicting sea surface height (SSH) in which the height is represented by the amount the plane representing mean sea level is deformed.
Another mapping that is often used is mapping different values to different colors. A number of color mappings have been previously recommended [30, 44, 45]. There are actually two classes of colormaps: shading and functional. In shading colormaps, the colors are determined by the lighting, the surface properties, and the relationship between the light(s) and the surface. Shading colormaps are most frequently used to help visualize surface shape (one is used in Fig. 1, for example). In functional colormaps, the color at each point is determined by mapping functional values (pressure, temperature, velocity components, etc.) into colors.
Opacity
For data defined in 3D space, a third mapping is possible. The functional
values can be thought of as representing the local density within the volume. An
image is formed by projecting a light through the volume and creating an image
on a surface on the opposite side of the volume in much the way an X-rayimage is
formed (see Fig. 27). Usually the opacity mapping is proportional; i.e., the
larger the function values along the traversed path the more the light is
attenuated.
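This attenuation model can be sketched in a few lines. The following is a minimal illustration of the idea, not any particular published renderer; the nested-list volume layout, the Beer-Lambert exponential falloff, and the absorption coefficient are all illustrative assumptions:

```python
import math

def xray_project(volume, absorption=0.1):
    """Project parallel rays through a 3D density volume (nested lists,
    indexed [z][y][x]) along the z axis, attenuating each ray by the
    densities it traverses.  Returns a 2D image of transmitted light
    intensities in [0, 1], in the spirit of an X-ray image."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Accumulate density along the ray; larger sums along the
            # traversed path attenuate the light more.
            path_density = sum(volume[z][y][x] for z in range(depth))
            image[y][x] = math.exp(-absorption * path_density)
    return image

# A 2x2x2 volume: one column is dense, the rest is empty space.
vol = [[[5.0, 0.0], [0.0, 0.0]],
       [[5.0, 0.0], [0.0, 0.0]]]
img = xray_project(vol)
# The ray through the dense column is strongly attenuated, while rays
# through empty space arrive at full intensity.
```

The proportional-opacity mapping described above corresponds to the exponent being a sum of the sampled function values along the ray.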
Exploratory versus Presentation Visualization
There are two flavors of visualization: passive (presentation), usually done via hardcopy, and active (exploratory or interactive), usually done via softcopy.
In interactive visualization, the user can interactively change his view of the
data, i.e., move through the data, rotate one or more objects, change the
colormap(s), or move a light source around to simulate the way one would explore
a new environment or object. To interactively visualize a time-varying 3D
dataset, the computer upon which the data are visualized needs a large main
memory to hold the dataset and a graphics subsystem which will map world
coordinates (e.g., meters) into vertices of geometric primitives (e.g., lines or
polygons). It must also be able to rotate, translate, and zoom that set of
geometric primitives, and to render those geometric primitives into an image
[15, chapter 18].
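As a rough sketch of that world-coordinate-to-screen mapping (the rotation axis, zoom factor, translation, and orthographic projection below are arbitrary illustrative choices, not a specific graphics subsystem's API):

```python
import math

def transform_and_project(vertices, angle_deg=30.0, zoom=2.0,
                          translate=(1.0, 0.0, 0.0), viewport=(400, 400)):
    """Map world-coordinate vertices (e.g., meters) into 2D screen
    coordinates: rotate about the z axis, translate, zoom, then drop z
    (orthographic projection) and center the result in the viewport."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    w, h = viewport
    screen = []
    for x, y, z in vertices:
        # Rotate about z, then translate, then zoom.
        rx, ry = x * ca - y * sa, x * sa + y * ca
        rx, ry = (rx + translate[0]) * zoom, (ry + translate[1]) * zoom
        # Orthographic projection: discard z, center in the viewport
        # (screen y grows downward, hence the sign flip).
        screen.append((w / 2 + rx, h / 2 - ry))
    return screen

pts = transform_and_project([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```

A real graphics subsystem performs the same composition of transformations, typically as 4x4 matrix multiplies in hardware, followed by rasterization of the resulting primitives.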
Aliasing is the most common signal processing problem in visualization. To explore a large dataset, a scientist will often subsample the data volume. It is not unusual for a scientist to begin to visually analyze large datasets by considering only every tenth data point. This subsampling usually introduces significant visual artifacts (e.g., mislocated edges, false contours, etc.) into the imagery that mislead the scientist. Although most signal processing experts understand the potential problem, many users of scientific visualization toolkits [2, 9] and techniques are unaware of signal processing issues. Another simple data reduction technique that is often used is to consider only a small section of the total volume. One of the reasons signal processing has to be considered in visualizing scientific data is that the presentation medium is usually a sampled surface, e.g., a CRT.
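The effect of every-tenth-point subsampling is easy to reproduce. In the synthetic example below (the signal and rates are chosen purely for illustration), a well-sampled oscillation aliases so badly after subsampling that it disappears entirely:

```python
import math

# A signal with 25 cycles over 100 samples (4 samples per cycle, safely
# above the Nyquist rate).  Taking every tenth point leaves 10 samples
# for 25 cycles, far below the Nyquist rate, so the subsampled data
# alias to a different apparent signal.
full = [math.sin(2 * math.pi * 25 * n / 100) for n in range(100)]
subsampled = full[::10]

# Every tenth sample lands where the sine's argument is a multiple of
# 5*pi, so the subsampled "signal" is identically zero (to within
# floating-point error): the oscillation vanishes entirely, a severe
# visual artifact of exactly the kind described above.
print(max(abs(v) for v in subsampled))
```

A scientist looking only at the subsampled points would conclude the field is flat, which is precisely the kind of misleading artifact the text warns about.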
In Figs. 9a-9d, the triangulation is as in the figure below, which creates long "ridges" and "valleys" along the diagonals. If, instead, the triangulation is alternated, as in the figures below, the peaks and bottoms are apparent (Fig. 11a). Fig. 10c shows the improvement when a sinc filter is used.
With the widespread use of computer monitors capable of displaying images
with 256 shades of red, green, and blue, respectively, color has become a common
and effective tool in visualization. In scientific visualization,
different applications or techniques utilize different color spaces (models).
Since most sensors and displays are based on red, green, and blue filters or
phosphors, the techniques most closely associated with input and output -- e.g.,
histogram equalization and image compositing -- tend to be based on the RGB
color model [15]. The hue/saturation/value (HSV) color space is often used in
direct volume visualization and other mapping-oriented algorithms, since it is a
reasonable model of the perceptual level of the human visual system. The
colormap used in Figs. 8-11, for example, is the hue range of the HSV color
space, with V (brightness) and S fixed at their maximum value. Other color
models, such as YUV or YCrCb -- both of which are luminance (Y) plus two
chrominance difference signals and are the color models used in television
standards -- are often the representation used when performing image
compression. For more information on color models, see [15].

There are many other signal processing issues in scientific visualization.
Rather than illustrate them with contrived examples, we now describe some
visualization techniques, emphasizing the signal processing aspects in each. A
delineation is made between time-invariant visualization and time-variant visualization to emphasize the additional signal processing problems that can occur when the data to be visualized are time-varying.

Although the major emphasis in scientific visualization is on 3D data,
developing better 2D visualization techniques remains an active area of research
and development. The major distinction between image processing and 2D
time-invariant scientific visualization is in the sampling density. In image
processing, functional values sampled on a regular rectilinear grid are mapped
one-to-one onto pixels; in 2D scientific visualization the number of data values
is usually much less than the number of pixels in the resulting image, which
motivates many of the signal processing issues like sampling, interpolation, and
reconstruction. Many of the techniques for higher dimensional spaces are
extensions of the techniques developed for 2D data. Although the domain is a
planar surface in the following discussion, most of the techniques easily extend
to data sampled on a curved surface.

Scalar Visualization
Contours
One of the most basic scientific visualization techniques is the generation of
a set of curves within a plane which represent locations of constant functional
value ("isocontours" or "contours"). In Fig. 12 a set of functional values are
extracted from the lower left of Table 1 and x's are placed on the sampled
surface where the functional value of 0.50 would occur based on linear
interpolation between the sample values. In Fig. 13a-13d the x's are connected
in 4 of the 16 possible ways; the correct contouring is unclear since there are
saddle points due to the non-planar quadrilaterals formed by adjacent vertices.
This is the same class of problem as shown in Fig. 9c. Sabin [46] presents a
number of techniques that can be used to resolve the ambiguity. A common method
is to split the cell into four triangles by connecting opposite vertices, set
the value at the center of the cell to the average of the values at the four
vertices, find the intersections along each triangle edge, and connect the
points of intersection in the obvious way, which will be unambiguous. This is
shown in Fig. 14a for one of the ambiguous cells. Applying this technique to the
data in Fig. 12 produces the contours in Fig. 14b, which have twice the number
of linear segments per contour as the ones in Fig. 13a. Even more elaborate
reconstruction techniques [15] can be employed.

To create an image of N pixels from a 2D sample set with far fewer than N
values requires some type of interpolation. A number of such techniques exist
and two are shown in Fig. 15. In Fig. 15a is a set of 4 sample points that
represent the vertices of a quadrilateral. The functional values at these 4
sample points can be mapped in a number of ways, one of which is via a color
map. The issue then becomes how to color the pixels between the pixels
associated with the vertices.
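One common answer, sketched below, is bilinear interpolation in the cell's parametric coordinates; the function name and the (s, t) parameterization are illustrative, and the interpolated value would then be pushed through the colormap:

```python
def bilinear(f00, f10, f01, f11, s, t):
    """Bilinearly interpolate the functional values at the four corners
    of a quadrilateral: f00 at (0,0), f10 at (1,0), f01 at (0,1), and
    f11 at (1,1), evaluated at parametric position (s, t) in [0,1]^2.
    This is one way to fill in the pixels between the pixels associated
    with the vertices."""
    bottom = f00 * (1 - s) + f10 * s   # interpolate along the bottom edge
    top = f01 * (1 - s) + f11 * s      # interpolate along the top edge
    return bottom * (1 - t) + top * t  # then between the two edges

# The corners reproduce the samples exactly; the cell center gets the
# average of the four corner values.
center = bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5)
```

Interpolating the functional value and then applying the colormap, versus interpolating the colors themselves, generally gives different results; the former is usually what the scientist intends.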
Arrows
Multivariate visualization involves visualizing datasets with multiple
functional values at each sample point; the most common task is vector field
visualization. One way to visualize a vector quantity is to use geometric forms
such as arrows or tufts [40, 57] (see Fig. 17). The direction is determined by the vector sum of the components and the length of the arrow is scaled by the
maximum magnitude within the field. Numerous problems occur with using geometric
forms. First, since the primitive must cover multiple pixels, there is
significant visual ambiguity as to which point the arrow represents; is it the
head, the tail, or somewhere in between? Although the arrow usually represents
the functional value at the tail, visually that is hard to perceive, unless the
underlying grid structure is overlaid as in Fig. 17.
By analyzing the traditional arrow technique from a signal processing
perspective of sampling density and by viewing the visualization as the
interpolation process it is, we can develop a higher resolution 2D vector
visualization technique. Examples of a higher resolution scheme are shown in
Fig. 19 and Fig. 20, in which the surface current (a 2D vector field) in the
North Pacific is shown. The technique has been given the name "colorwheel" [28,
29]. In this mapping of vector data, the HSV color space is used. The direction
of the vector is represented by the hue and the magnitude of the vector is
redundantly mapped to both value and saturation since the human visual system
has a hard time distinguishing between changes in value and changes in
saturation over most of the HSV color space.
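A minimal sketch of such a vector-to-color mapping follows, using Python's standard colorsys module; the normalization by a maximum magnitude and the exact hue origin are illustrative assumptions, not the published colorwheel mapping:

```python
import colorsys
import math

def colorwheel(u, v, max_magnitude):
    """Map a 2D vector (u, v) to an RGB color in the spirit of the
    colorwheel technique: direction -> hue, magnitude -> both saturation
    and value (redundantly, since the eye distinguishes the two poorly)."""
    # Direction as an angle in [0, 2*pi), scaled to a hue in [0, 1).
    hue = (math.atan2(v, u) % (2 * math.pi)) / (2 * math.pi)
    # Magnitude normalized to [0, 1] and clamped.
    mag = min(math.hypot(u, v) / max_magnitude, 1.0)
    return colorsys.hsv_to_rgb(hue, mag, mag)

# A zero vector maps to black; a strong eastward current maps to a
# fully saturated, bright red (hue 0).
r, g, b = colorwheel(1.0, 0.0, 1.0)
```

Because every pixel can carry its own color, this mapping supports a much higher sampling density than one arrow per several dozen pixels.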
A common way to obtain a more global view of steady flow is to create
streamlines, which are lines everywhere tangential to the velocity field.
Streamlines can be started anywhere within the field. The creation process is a
curve fitting problem and the determination of the consecutive points on the
curve necessitates consideration of the Nyquist sampling rate and interpolation
theory. Basically, there should be two points on the curve in each cell.
Bilinear interpolation is frequently used for structured grids, while scattered data interpolation techniques, e.g., Hardy's multiquadric method [17], are usually used for irregular grids. The curve generation is often based on fourth-order
Runge-Kutta methods, but even more accurate methods exist [19, 40]. There is a
speed-accuracy tradeoff in streamline determination.

A recently presented technique for visualizing 2D vector fields based on signal processing concepts is called Line Integral Convolution (LIC) [10, 16].
The LIC algorithm combines a vector field sampled on a uniform rectilinear grid
[10] (or a structured curvilinear grid [16]) with a texture map image (e.g., a
random noise field) of the same dimension as the grid to produce an image in
which the texture has been "locally blurred" by the vector field. The pixels in
the output image are produced by the one-dimensional convolution of the texture
pixels along a streamline (see Fig. 22) with a filter kernel:

F'(i,j) = \sum_{p \in P} F(p) \int_{s_p}^{s_{p+1}} k(w)\,dw

where P is the set of grid cells along the streamline, F(p) is the input texture pixel at grid cell p, s_p and s_{p+1} are the arclengths of the streamline from the point (i,j) to where it enters and exits cell p, and k(w) is the convolution filter function (usually a box filter).
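In code, a drastically simplified version of this convolution might look as follows; the unit-magnitude vector field (given as angles), the equal per-cell weighting (a box filter without the exact arclength bookkeeping for s_p and s_{p+1}), and the fixed step count are simplifying assumptions rather than the published algorithm:

```python
import math

def lic_sketch(angle_field, texture, length=5):
    """Simplified Line Integral Convolution on a 2D grid: from each
    pixel, step along the vector field (given as angles in radians) in
    both directions, averaging the noise texture with a box filter.
    Each visited cell contributes equally; the starting pixel is
    counted once per direction, another simplification."""
    h, w = len(texture), len(texture[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for direction in (1, -1):     # trace forward, then backward
                y, x = float(i), float(j)
                for _ in range(length):
                    yi, xi = int(round(y)), int(round(x))
                    if not (0 <= yi < h and 0 <= xi < w):
                        break             # streamline left the grid
                    total += texture[yi][xi]
                    count += 1
                    a = angle_field[yi][xi]
                    y += direction * math.sin(a)
                    x += direction * math.cos(a)
            out[i][j] = total / count     # box-filter average
    return out
```

Run on a random noise texture, this smears the noise along the streamlines, which is exactly the "local blurring" that makes the flow directions visible.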
Yet another common method of visualizing 2D flow was presented in [20]. It connects critical points to display the 2D vector field topology. By connecting the critical points -- i.e., the points at which the vector magnitude vanishes -- with curves, a skeleton can be defined that characterizes the whole 2D vector field. Icons indicate the type of critical points: attracting/repelling foci, attracting/repelling nodes, saddle points, and centers. Examples of these icons and the classification criteria are shown in Fig. 24. Images showing only the topology of the vector field eliminate most of the redundant information. Critical point analysis is in effect a feature extraction technique and will be used in the applications section in a feature detection algorithm and later to show the effects of multiresolution analysis.
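The classification criteria of Fig. 24 reduce to inspecting the eigenvalues of the Jacobian of the vector field at the critical point. The sketch below computes them for a 2x2 Jacobian and applies the standard taxonomy; the exact zero tests on floating-point values are a simplification a robust implementation would replace with tolerances:

```python
import cmath

def classify_critical_point(jacobian):
    """Classify a 2D critical point from the 2x2 Jacobian [[a, b], [c, d]]
    of the vector field: complex eigenvalues give foci (or centers when
    their real parts vanish), real eigenvalues of mixed sign give saddle
    points, and real eigenvalues of one sign give nodes.  Attracting vs.
    repelling follows the sign of the real parts (R1, R2 in Fig. 24)."""
    (a, b), (c, d) = jacobian
    trace, det = a + d, a * d - b * c
    disc = trace * trace - 4 * det
    l1 = (trace + cmath.sqrt(disc)) / 2   # the two eigenvalues
    l2 = (trace - cmath.sqrt(disc)) / 2
    r1, r2 = l1.real, l2.real
    if l1.imag != 0 or l2.imag != 0:      # rotation present
        if r1 == 0 and r2 == 0:
            return "center"
        return "attracting focus" if r1 < 0 else "repelling focus"
    if r1 * r2 < 0:
        return "saddle point"
    return "attracting node" if r1 < 0 else "repelling node"

print(classify_critical_point([[0.0, -1.0], [1.0, 0.0]]))   # center
print(classify_critical_point([[1.0, 0.0], [0.0, -1.0]]))   # saddle point
```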
Visualizing 2D data is rather straightforward, in that the domain is easily conceived as a flat surface with the functional values mapped into geometric primitives, color, etc. The domain is simply scaled into the surface area. There is no reduction in dimensions in going from the model domain to the image domain, so the relationship between positions in the model domain and positions in device coordinates is easily understood.

Scalar Data
The process of visualizing data sampled in three-space is usually called
volume visualization or volume rendering. Westover [59] has delineated the major
signal processing issues in scalar volume visualization. However, most
visualization implementations -- including Westover's -- take shortcuts for
simplicity or speed. Elvins [14] provides an excellent summary of the many
scalar volume visualization algorithms, dividing them as most people do into two
categories: surface fitting and direct volume rendering (DVR). Surface Fitting Surface fitting algorithms are basically the 3D extension of contours in two
space. One of the most frequently used techniques is called the Marching Cubes
(MC) method [31]. It marches through a volume voxel-by-voxel locating
and connecting all points with a particular user-defined value. The "locations
and connections" form the vertices and edges of triangles which in turn form a
tessellated surface.
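Marching Cubes itself relies on a 256-entry case table, but its 2D analogue (a marching-squares-style march) shows the core idea of marching cell by cell and locating isovalue crossings by linear interpolation. The sketch below is illustrative; connecting the crossings into line segments (the per-case table) is omitted for brevity:

```python
def march_squares(grid, iso):
    """2D analogue of Marching Cubes: march cell by cell through a grid
    of samples (grid[row][col]); in each cell, find the points where the
    isovalue crosses a cell edge, by linear interpolation between the
    corner values.  Returns the crossing points as (x, y) pairs."""
    def cross(p, q, fp, fq):
        t = (iso - fp) / (fq - fp)        # linear interpolation along edge
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    points = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            # The four corners of this cell, in order around its boundary.
            corners = [((c, r), grid[r][c]),
                       ((c + 1, r), grid[r][c + 1]),
                       ((c + 1, r + 1), grid[r + 1][c + 1]),
                       ((c, r + 1), grid[r + 1][c])]
            for k in range(4):
                (p, fp), (q, fq) = corners[k], corners[(k + 1) % 4]
                if (fp < iso) != (fq < iso):   # edge straddles the isovalue
                    points.append(cross(p, q, fp, fq))
    return points

# One cell whose left corners are below and right corners above 0.5:
# the 0.5-contour crosses the top and bottom edges at x = 0.5.
crossings = march_squares([[0.0, 1.0], [0.0, 1.0]], 0.5)
```

In 3D, the same march visits voxels, interpolates crossings along cube edges, and emits triangles instead of line segments.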
DVR algorithms [14, 59] map the data in the volume directly into an image;
there are no intermediate geometric primitives as there are in surface
fitting. There are two methods of DVR: feed-backwards, in which the mapping is
from the image plane into the data, and feed-forward, in which the mapping is
from each voxel into the image plane. Sampling rays are projected through the
data volume and are either accumulated or attenuated, depending on the approach.
As it traverses the volume, a ray's properties, such as color and opacity, are
modified based on the data classification to produce the image. A schematic
diagram of the volume rendering process is shown in Fig. 27.
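The accumulation along one ray can be sketched as front-to-back compositing of classified samples. This is a minimal illustration, not a particular renderer: grayscale colors keep it short, and the classification step that assigns a (color, opacity) pair to each data value is assumed to have happened already:

```python
def composite_ray(samples):
    """Front-to-back compositing of (color, opacity) samples along one
    sampling ray, as in direct volume rendering: each sample's color is
    weighted by its own opacity and by the transparency accumulated in
    front of it.  Returns the ray's accumulated color and opacity."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # light surviving to this sample
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= 0.999:               # early ray termination
            break
    return color, alpha

# A fully opaque first sample hides everything behind it.
print(composite_ray([(1.0, 1.0), (0.5, 1.0)]))   # -> (1.0, 1.0)
```

Back-to-front compositing gives the same result with the "over" operator applied in the opposite order, but cannot terminate rays early.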
Malzbender [33] and Totsuka and Levoy [55] have recently shown a way to use
the Fourier Projection-Slice Theorem to generate images of volume data faster
than with the spatial domain approaches. Exploiting this theorem allows an image
to be obtained for any view direction and orientation by extracting a 2D plane
of data from a 3D Fourier-transformed volume of data and inverse transforming
just the slice of data. This technique, frequency domain volume rendering (or
Fourier volume rendering), allows an image (projection) to be generated in O(N^2 log N) time for a volume of size N^3.

Although processing data on regular rectilinear grids is more straightforward, in many situations the data originates on an irregular grid. Slightly
different algorithms are required for visualization. In general the techniques
tend to be slower, but more accurate than those for structured grids, since the
basic geometric entity in the grid is a triangle in 2D and a tetrahedron in 3D.
Often data on 3D irregular grids are interpolated onto slicing planes for visual
analysis. One often used and simple volume rendering technique is a point cloud
(see Fig. 28a), in which functional values are mapped into a colormap,
positioned at their location in 3D space, and projected onto a view plane. Since
the data is not regularly sampled, moiré patterns do not occur. Motion
(reprojection from a different direction) allows the user to see the 3D
structure of the data. Few other DVR techniques exist for data sampled on an
irregular grid [14]. Vector data sampled on irregular grids is easily visualized
as 3D arrows or 3D streamlines.
Vector Data
Visualizing 3D vector data is an active research area [10, 16, 18, 24, 25,
26]. The major problem is the difficulty of showing global and local values;
e.g., a vortex on one scale is a current or a wave on a more highly-resolved
sampling structure. Projecting magnitude and direction from 3D space into one
uncluttered image is a difficult problem.
Some Common Techniques
Techniques that advect smoke, clouds, and flow volumes [32, 34, 35] do a good
job of showing the flow at a global level, but not at a detailed level.
Particle-based [22, 58] techniques show local flow properties well, but don't
do a good job of showing the big picture. Helman and Hesselink [21] have extended
the vector field topology techniques to 3D; unfortunately, the approach is not
capable of producing as complete a view in 3D space as it is in 2D space.
Color Sphere
Hall [18] has developed a technique which visualizes 3D vectors using
perceptually-based color spaces. As with the colorwheel technique [28, 29], the
technique automatically emphasizes and classifies critical points. Foci and
centers are rotated slices from the color space. The biggest advantages are that
(1) color is invariant under projection, and (2) high sampling rates can be
used. The principal drawbacks of the color-based representations are the lack of
a standard mapping from vector-space to color-space, and user inexperience. The
first drawback can be mitigated by showing the color space used; the second will
only be overcome with use. Hall [18] proposes using a perceptually linear color space like the Munsell space, which is based on the opponent theory of color vision. Our experience is that this may be reaching too far and that limiting the paradigm to two-space may be wise. Note that the color space can be quantized to produce abrupt edges, which may help a scientist see global variations better.
From a signal processing perspective, color is a way to represent a 3D vector
quantity in a single sample (pixel) and thus provide a dense display of
multivariate information.
We have already discussed visualization of time-varying data to some extent;
we have just not talked about the problems in presenting it in a time-varying
fashion. For example, flow data are inherently time-varying. Let's now discuss
the pitfalls in presenting it in such a way.
In the following sections, we consider various signal processing issues in visualizing dynamic physical oceanography data.
Feature Detection, Tracking, and Animation
Feature detection in data volumes can be viewed as analogous to signal
detection at a radar receiver. It is the radar signal detector that recognizes
useful signals embedded in noise in a time sequence. For a data volume, the
feature detection algorithm must discern important features from noise. Noise
can be conceptualized as anything that is not a feature.
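One concrete realization of this idea, used below for a 2D scalar field stored as nested lists, treats signal points as values above a threshold and groups them with a breadth-first flood fill. The function name, the threshold criterion, and the 4-connectivity are illustrative choices standing in for whatever feature property (high temperature, low pressure, etc.) is of interest:

```python
from collections import deque

def extract_features(field, threshold):
    """Detect features as connected regions of signal points: threshold
    the field, then group neighboring above-threshold points (BFS flood
    fill, 4-connectivity).  Everything below the threshold is treated as
    noise.  Returns one list of (row, col) points per feature."""
    rows, cols = len(field), len(field[0])
    seen, features = set(), []
    for r in range(rows):
        for c in range(cols):
            if field[r][c] < threshold or (r, c) in seen:
                continue
            # Flood-fill a new feature starting from this signal point.
            feature, queue = [], deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                feature.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and field[ny][nx] >= threshold
                            and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            features.append(feature)
    return features

# Two separate warm regions in a temperature field yield two features.
field = [[0, 9, 0, 0],
         [0, 9, 0, 9],
         [0, 0, 0, 9]]
print(len(extract_features(field, 5)))   # -> 2
```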
Scalar Data Examples
Features are amorphous objects distinguished by a particular range of
functional values (signal points) or by sharp gradients. Thus detecting features
requires finding all the connected signal points that share the same feature
property, e.g., high temperature, low pressure, etc., or locating the boundaries
that enclose the features. For example, a feature detection algorithm was
reported in [47, 50, 51] to detect and visualize vortices in computational fluid
dynamics (CFD) data. The algorithm starts by finding the maxima in the vorticity
field. Then "similar" points around the maxima are recognized to constitute the
feature objects.

Flow Data Examples
Features in flow fields exhibit particular flow patterns, e.g., a vortex in
CFD and an eddy in ocean circulations. One way to locate them involves the
derivation of other mathematical quantities associated with the flow patterns
[3].
Tracking and Animation
Usually, features are temporal phenomena. A completely identified feature
thus consists of all its instances tracked over time. In tracking the feature
instances, the tracking algorithm depends on the coherence of the feature's most
important properties. Since these properties vary continuously, making the
feature dissimilar to itself from time to time, successful tracking depends on
finding the best match between feature instances and there being a sufficient
probability "gap" to determine the true match. This expectation, in turn,
largely relies on there being only small changes in the feature's properties
between sampling instances. In most cases, the sampling rate for a dataset is
constant. If it is low with respect to the feature's dynamic rate, tracking
errors will occur. A simple example is given in Fig. 33 to show the relationship
in a tracking process.
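A minimal stand-in for the property-coherence matching described above is nearest-centroid matching between consecutive time steps; reducing each feature to its centroid and using a single distance bound are simplifying assumptions, since a real tracker would compare several feature properties:

```python
import math

def track(features_t0, features_t1, max_distance):
    """Greedily match feature instances between consecutive time steps
    by nearest centroid.  Each feature is given as its centroid (x, y);
    a feature whose best match is farther than max_distance is declared
    lost -- the tracking failure that occurs when the sampling rate is
    too low for the feature's dynamic rate.  Returns (i_t0, i_t1) pairs."""
    matches, taken = [], set()
    for i, (x0, y0) in enumerate(features_t0):
        best, best_d = None, max_distance
        for j, (x1, y1) in enumerate(features_t1):
            d = math.hypot(x1 - x0, y1 - y0)
            if j not in taken and d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            taken.add(best)
    return matches

# Two eddies that drift slightly between sampling instances are matched;
# a third that moved too far between samples is not.
print(track([(0, 0), (10, 10), (50, 50)],
            [(1, 0), (11, 10), (90, 90)], max_distance=5.0))
```

The max_distance bound plays the role of the probability "gap": when two candidates are nearly equidistant, or all candidates exceed the bound, the true match cannot be determined.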
Figure 2. The fuselage of a small plane with a shading map on the left and a functional colormap on the right. The colors in the functional colormap are determined by the computed pressure. Note the specular highlights along the sharp curves on the side with the shading map; note the high pressures along the nose cone and the windshield on the side with the function map.
are imperceptible or that 16-bit numbers could be used to represent all the color resolution that can be seen. Colormaps can be viewed much like floating point number formats: the product of the range and the precision is fixed. To see a large range you have to give up precision, and vice versa. To increase the functional resolution of a colormap, i.e., the ability to "see" small differences in functional values, the range of functional values mapped by the colormap must be limited.
Figure 3. Sea surface height shown using various colors.
Figure 4. Sea surface height shown using various colors and surface deformation.
Signal Processing Issues
Figure 5. A continuous line drawn between (2,2) and (10,7).
Figure 6. A discrete line drawn between (2,2) and (10,7).
Figure 7. An antialiased line drawn between (2,2) and (10,7) using a one-pixel-wide box filter. The intensity of each pixel is proportional to the area covered. On many devices, the pixels would actually overlap.
, as shown in Fig. 8. Color is used to help elucidate the 3D shape of the surface. The minimum functional value (-1) is shown in orange and the maximum value (1) in red; the same colormap is used in Figs. 8-11. From Nyquist's theorem it is known that there must be at least 8 samples over the domain to accurately reconstruct the function sin(x) or cos(y). To accurately reconstruct the function sin(x)cos(y) as a surface from uniformly spaced samples requires at least 8 samples in each dimension, indexed by N = 0, 1, ..., 7 in the x and y directions.
Figure 8. f(x,y) = sin(x)cos(y) evaluated over the interval in both dimensions and supersampled at 200x200 to give a good approximation to the continuous function.
With an undersampled grid, a surface with only two peaks instead of 32 peaks is created (Fig. 9a and Fig. 10a). With a 6x6 sampling, at sample locations indexed by N = 0, 1, 2, 3, 4, or 5 (Fig. 9b and Fig. 10b), the reconstructed surface undulates, but it is still a poor approximation of the original surface.
Figure 9. Reconstruction of sin(x)cos(y) from four different samplings using linear interpolation (a triangular filter).
Figure 10. Reconstruction of sin(x)cos(y) from four different samplings using optimal interpolation (a sinc filter).
Figure 11. Visual difference in two triangulation schemes.
Color Models
Summary
Visualization of Time-Invariant 2D Data
Figure 12. A 2D range of functional values extracted from the lower left of Table 1 with x's placed on the sampled surface where the functional value of 0.50 would occur based on linear interpolation between the sample values.
Figure 13. Four of sixteen possible connections (contours) of the interpolated functional values in Fig. 12.
Figure 14. (a) An example of how to determine the correct contour connectivity in a cell with a saddle point. The cell is split into four triangles by connecting opposite vertices, the value at the center of the cell is set to the average of the values at the four vertices, the intersections are found along each triangle edge, and the points of intersection are connected in the obvious way, which is unambiguous. (b) Application of the technique to the data in Fig. 12.
Shading Techniques
Figure 15. (a) A quadrilateral with functional values defined only at the vertices. (b) A colormap used to map functional values to colors. (c) Triangulation used for flat shading. (d) The flat-shaded quadrilateral with the functional value used to color each triangle overlaid. (e) Interpolation used for Gouraud shading. (f) The Gouraud-shaded quadrilateral.
Figure 16. Current magnitude in the Persian Gulf shown with flat shading and Gouraud shading.
Figure 17. 2D vectors shown using arrows on a plane. The lengths of the arrows are proportional to the magnitude of the vector. Excessively large values are clipped and extremely small values are eliminated. The underlying grid structure is drawn to provide a point of reference as to which way the vector is oriented. Notice how cluttered the image is.
Figure 18. The 2D current within a layer of the ocean visualized using arrows on a surface which is projected obliquely along with the associated ocean bottom. Note the significant visual distortion due to the oblique projection.
"Colorwheel"
Figure 19. Ocean current in the NE Pacific (a 2D vector field) is shown using color in lieu of geometric primitives. In this mapping, the HSV color space is used. The direction of the vector is represented by the hue and the magnitude of the vector is redundantly mapped to both value and saturation since the human visual system has a hard time distinguishing between changes in value and changes in saturation over most of the HSV color space. The brown area is the landmass.
Figure 20. Same picture as Fig. 19, except that: 1) the smaller vector magnitudes are not shown and 2) a log-scale mapping of vector magnitude to saturation/value is used to better match the human visual system's perception.
Figure 21. The same picture as Fig. 17, except the vectors are shown using the colorwheel technique instead of arrows.
Streamlines
where P is the set of grid cells along the streamline, F(p) is the input texture pixel at grid cell p, s_p is the arclength of the streamline from the point (i,j) to where the streamline enters cell p, s_{p+1} is the arclength of the streamline from the point (i,j) to where the streamline exits cell p, and k(w) is the convolution filter function (usually a box filter).
Figure 23. Flow visualization of the ocean current in the Northeast Pacific using the Line Integral Convolution technique. Red indicates areas of high magnitude flow and blue indicates areas of low magnitude flow.
Critical Point Analysis
Figure 24. Example icons and classification criteria for critical points. R1 and R2 denote the real parts of the eigenvalues of the Jacobian; I1 and I2 the imaginary parts [20].
Visualization of Time-Invariant 3D Data
Visualizing 3D data when the viewing device is a 2D surface is much more challenging. Since perspective projection of a data volume onto a flat surface is ambiguous, it is hard to tell relative distances and mentally perform inverse mappings from device coordinates back into model coordinates. Although there are many visual cues that can help (depth cueing, obscuration, lighting and shading, shadows, perspective projections, projection lines, contextual clues), they often do not give a definitive inverse mapping. The best visual cue is usually motion; by having the ability to move the volume about or to move within the volume, the user is able to get a much better understanding of the data volume.
Figure 25. An isosurface extracted from a volume of 64 x 64 x 64 density values. The surface is created by connecting points with equal functional value, in this case intending to represent the hard structures within the cranium.
Figure 26. (a) An isosurface with holes in it. (b) An isosurface without holes in it.
Direct Volume Rendering
Figure 28. Two DVRs from a volume of density values. In both parts the opacity and saturation of the voxel is directly proportional to the magnitude of the density value in the voxel, and the hue is determined from the HSV [15] color space by mapping the range of density values into angles from 0 to 360 degrees such that the smallest density values are red and the largest density values are blue-violet. In (a), the density values are assumed to be randomly distributed within the voxel and mapped as point entities directly to the screen. In (b), Westover's splatting algorithm [59] is used to generate a more continuous image. The tradeoff is between time and image quality.
Frequency Domain Volume Rendering
Visualization of Time-Varying Data
Applications
Figure 29. Ocean features (eddies) in the Gulf of Mexico extracted for visualization: (a) a surface fitting algorithm is used to generate surfaces on which the functional (scalar) value is constant; (b) modeling is performed to isolate physical phenomena, e.g., eccentricity must be arbitrarily near unity, a minimum and maximum size is imposed, there must be a surface at the sea surface, etc.
The four curves in Fig. 30 show the functional
values of the larger extracted eddy in Fig. 29, at four different depths over a
150-day interval sampled every 15 days. The large variance in the boundary
values over the 150-day interval requires an adaptive algorithm. A series of
small multiples is shown in Fig. 31. The order of the semi-monthly images is
left to right, top to bottom. The feature merging, evolving and splitting are
easily observed in the series of small multiples.
Figure 30. The functional values (C) of the larger extracted eddy in Fig. 29 at four depths over a 150-day interval.
Figure 31. A small multiple of extracted eddies which shows the variance in their characteristics over space and time. The sequence progresses from left to right and top to bottom (lexicographical order). The timestep between images is 15 days. The feature merging, evolving, and splitting are easily observed.
Figure 32. (a) The top of three extracted eddies in the Gulf of Mexico with the ocean bottom as context. (b) The corresponding flow field in the top layer.