                Long Abstracts of Papers
	       in the AVS '95 Proceedings

	FYI, these are the long abstracts upon which the
	short 1-2 sentence presentation descriptions
	in the Advance Program of AVS '95 were based.



--------------------------------------------------------
-----------------DEVELOPERS TRACK ----------------------
---------------------------------------------------------
Chair:  Howard Watkins
---------------------------------------------------------

Author: Mitchell Roth
        Arctic Region Supercomputing Center
        University of Alaska
        Fairbanks, AK 99775-6020
        907-474-5411 (voice)
        907-474-5494 (fax)
        email: roth@arsc.edu

Title:  AVS for the CRAY T3D

Abstract:

AVS is a widely used package for scientific visualization which runs on
platforms ranging from workstations to Cray Research parallel vector
supercomputers.  This paper reports on the CRAY T3D parallel
implementation and performance of several of the compute intensive AVS
modules from the standard libraries. The implementation issues discussed
in the paper are: (1) T3D/Y-MP communication, (2) module flow
control, (3) module generation, and (4) parallel computational algorithms
for the modules.
-----------------------------------------------------------------------

Authors:        Karen Woys and Mitchell Roth
                Arctic Region Supercomputing Center
                University of Alaska
                Fairbanks, AK 99775-6020
                907-474-6307 (voice)
                907-474-5494 (fax)
                email: woys@arsc.edu and roth@arsc.edu

Title:  AVS Optimization for CRAY Y-MP Vector Processing

Abstract:

The AVS source code is modularized and highly portable, but is not
vectorized to take advantage of the Cray Research, Inc. (CRI) parallel
vector architecture.  This project analyzed and optimized the following
AVS modules for vector processing on a CRAY Y-MP supercomputer:  Read
Field, Downsize, Field Math, Field to Mesh, Compute Gradient, and
Interpolate.  Based on user CPU time, speedups of 10x were generally
obtained through vectorization and performance up to 70 MFLOPS (millions
of floating point operations per second) was attained in the optimized
code.   This paper discusses the vectorization and performance
measurement techniques that were employed and the ways in which these
techniques may be applied to other AVS library modules as well as
user-written modules.
---------------------------------------------------------------

An Application for Visualizing Molecular
Dynamics Data Developed under AVS/Express

Upul R. Obeysekare, Chas J. Williams, and Robert O. Rosenberg

Scientific Visualization Laboratory
Center for Computational Science
Information Technology Division
Naval Research Laboratory
Washington, DC 20375-5320

We discuss issues related to developing an application
for visualizing data from molecular dynamics simulations.
The application is developed using the visual programming and
user-interface design features of the AVS/Express application
development environment.  A suite of AVS5 modules (reported
at AVS '94) is being converted to AVS/Express objects that use
AVS/Express's new field data scheme.  Details of defining
spheres to represent atoms under the new field data scheme
are discussed.  Important features for molecular dynamics
visualization applications, such as picking and highlighting
atoms, are being tested using the new architecture of AVS/Express.
-----------------------------------------------------------------------

Speaker:  Walter Schmeing, VISTEC Software GmbH, Berlin, Germany

Title: Examining Large Datasets using DBFLD (Field Database)

Description:  DBFLD has been developed to examine very large datasets
              stored in compressed form, saving considerable disk space.
              Smart readers access these compressed data with the ability
              to crop, downsize, interpolate, mirror, stretch,
              slice, etc. while reading, so that very large
              datasets can now be handled without running out of memory.
              Interfaces (APIs) are available for AVS5 fields, AVS5 UCDs,
              and AVS/Express FLDs.
-----------------------------------------------------------------------

Dr S. A. Khaddaj
Dept. of Computer Science and Electronic Systems,
Kingston University,
Penrhyn Road,
Kingston upon Thames KT1 2EE.
email: s.khaddaj@king.ac.uk

Title:  An Interactive Design Tool for Scientific Simulations

Abstract

This work is concerned with the development of an interactive graphical
environment for scientific simulations. As in our previous work
[1] [2], special attention is paid to the production, visualisation and
animation of the surface morphologies generated from simulation of a
material growth technique known as molecular-beam epitaxy (MBE). We are
particularly concerned with the use of Application Visualisation System
(AVS) as the visual programming interface for the simulations. The
simulation/graphics/animation environment is described. New AVS modules
created for this specific application are discussed and results are
presented.

[1] S. A. Khaddaj et al., `The Application of Graphics and Animation Techniques
for Large-Scale Simulation', in R. Grebe et al. (eds), Transputer Applications
and Systems 93, IOS Press, Amsterdam, pp. 244-259, 1993.

[2] S. A. Khaddaj et al., `AVS: A Visualisation Environment for Atomic
Arrangement and Materials Design', AVS '94, Advanced Visual Systems, Inc.,
Boston, Massachusetts, pp. 362-377, 1994.
-----------------------------------------------------------------------------

AVS/Express Case Study
Howard Watkins, John O'Sullivan, David Heath, Peter Yerburgh

Intera are in the process of integrating the AVS/Express architecture with
their own application framework based on C++ and XVT. Some issues of the
integration will be discussed and examples given of the power of the AVS/Express
approach.
-----------------------------------------------------------------------------

AVS5 and POSTGRES: Large Scale Data Analysis

Raymond E. Flanery Jr.
Director, Advanced Visualization Research Center
Mathematical Sciences Section
Oak Ridge National Laboratory
phone:(615)574-0630
fax  :(615)574-0680
email:flanery@msr.epm.ornl.gov

A statistical analysis and browsing application for large data sets
has been developed using AVS5 for the user interface and the
public-domain DBMS POSTGRES for data storage.

The Large Scale Data Analysis project requires the use of statistical
techniques to extract useful or important information from datasets too
large for normal browsing techniques. The information filtered out of 
these datasets is stored using the POSTGRES DBMS.

AVS5 is used to build the networks for, and allow interaction with, the
statistical filters used to extract the information. It is also used
to allow browsing of the resulting databases.
-----------------------------------------------------------------------------

AVS/EXPRESS and PVM: Gas and Oil National Information Infrastructure
(GO-NII) Project

Raymond E. Flanery Jr.                             Dr. Bart D. Semeraro
Director, Advanced Visualization Research Center   Research Staff
Mathematical Sciences Section
Oak Ridge National Laboratory
phone:(615)574-0630                                (615)574-3130
fax  :(615)574-0680
email:flanery@msr.epm.ornl.gov                     semeraro@msr.epm.ornl.gov

We discuss the use of AVS/EXPRESS as an interface and rendering system
for parallelized visualization subroutines, utilizing PVM as the 
communication protocol.

The GO-NII project requires the visualization of seismic data sets which
are too large to be manipulated on a single graphics workstation. Typical
single shot data sets are ~500MB and a time series of seismic snapshots
would consist of hundreds of these data sets. 

We have implemented a few visualization subroutines with the PVM protocol
and interfaced these with AVS. This has allowed us to visualize seismic
data sets on a large heterogeneous network of SUN, SGI, DEC and HP
workstations.
-----------------------------------------------------------------------------

Creating a Scientific Environment with AVS/Express

Erin N. Thornton
Gary D. Black
Tom L. Keller
Karen L. Schuchart
Chance R. Younkin
Donald R. Jones

Environmental Molecular Sciences Laboratory
Pacific Northwest Laboratory(*)
Richland, Washington

Productive use of the advanced computational resources available to
molecular and environmental scientists requires not only a revolution in
computational methods, but also a corresponding revolution in the tools for
managing and analyzing computational experiments.  The Extensible
Computational Chemistry Environment (ECCE') is being developed at  Pacific
Northwest Laboratory to address these needs.  ECCE' is an integrated,
comprehensive environment for molecular modeling and simulation.  Key
components are application systems with graphical user interfaces,
chemistry-specific visualization applications, and scientific data
management.  The graphical user interface assists in selecting
computational parameters, recasts database input queries in terms of
scientific requirements, and assists the user via chemistry-specific help
facilities.  Visualization tools focus on graphical manipulation and
analysis of data from molecular modeling applications.  Seamless
integration of tools is accomplished through a common architecture for
molecular modeling applications that is based on an object-oriented
molecular data model.  The intent is to allow the execution of complex
computational "experiments," provide a framework for extending the
computational resources in the systems, and facilitate information sharing.

ECCE' is an integration of AVS/Express, an object-oriented database, and
an external GUI Builder.  AVS/Express serves as the software framework.  We
discuss the issues of integrating products with AVS/Express and customizing
AVS/Express to meet the difficult design requirements of the system being
created.  The system utilizes an extensively modified AVS network editor.
We further discuss the use of a common chemistry data model, and its
implementation in AVS/Express.  Finally, we address the use of an external
GUI builder to create common interfaces to various chemistry applications,
and the strategy of integrating these interfaces into the AVS/Express
application.


(*)The Pacific Northwest Laboratory is a multiprogram national laboratory
operated for the U.S. Department of Energy by Battelle Memorial Institute
under Contract DE-AC06-76RL0 1830.
-----------------------------------------------------------------------------

                Developing The Next Generation Nuclear
               Medical Imaging System With AVS/Express

                          David A. Goughnour
                          Jeffrey A. Hallett
                          ADAC Laboratories

ADAC Laboratories, the current world leader in nuclear medical
imaging, is currently developing its next-generation diagnostic imaging
workstation based on AVS/Express. Nuclear medical imaging is unique
in the imaging world in two respects.
From a pure technology viewpoint,
nuclear imaging relies heavily on processing of raw image data; using
a variety of imaging operations, quantitative results are produced upon
which radiologists base diagnoses and treatment procedures for
cardiac, oncological, and other types of life-threatening illnesses.
From the customer standpoint, 90% of the nuclear medicine market is
clinically-oriented. These users are generally not highly technical,
and are interested only in the aspects of the machine that make it
easy to use and allow them to produce the studies they need in the
shortest time possible. This paper describes how the various features
of AVS/Express are being leveraged by ADAC to not only address the raw
computational needs of an advanced nuclear medicine imaging system,
but also to produce a simple-to-use, but highly flexible, imaging
environment which will appeal to all types of nuclear medicine
providers from the base clinician to the advanced researcher.
-----------------------------------------------------------------------------

------------------------------------------------------
------------------CFD/ENGINEERING  TRACK--------------
------------------------------------------------------
Chair:  Larry Schoof
------------------------------------------------------

Interactive Scientific Exploration of Tokamak Gyro-Landau Fluid Turbulence
In A Visual Immersion Environment

This work was supported by the USDOE at the Lawrence Livermore National
Laboratory under contract number W-7405-ENG-48.

G.D. KERBEL, J.L. MILOVICH, and D.E. SHUMAKER
National Energy Research Supercomputer Center
Lawrence Livermore National Laboratory,
Livermore, California 94550, USA
gdk@kerbel.nersc.gov

R.E. WALTZ
General Atomics, San Diego, California 92138-5608, USA, waltz@gav.gat.com

A. VERLO
Electronic Visualization Laboratory, University of Illinois at Chicago,
Chicago, IL 60607-7053, USA, averlo@eecs.uic.edu

ABSTRACT

High performance data parallel algorithms for simulating and visualizing
gyro-Landau fluid (GLF) tokamak turbulence in three dimensions have been
used in predictive transport scaling studies [WKM] in connection with the
Numerical Tokamak Project.

The time advancement algorithm is composed of data parallel block
structured matrix solvers and multiple instance fast Fourier transforms
used in operator splitting and pseudospectral convolution evaluation.  The
overall performance of the algorithm is optimized by choosing a sequence of
data arrangements such that the computation in each arrangement is local
and there is a fast data rearrangement algorithm setting the next stage of
the sequence.  The existence of fast communication kernels connecting
certain classes of arrangements helps to determine how to structure the
algorithm.

The complexity of 3D tokamak turbulence in toroidal geometry requires that
advanced visualization techniques be used to understand and communicate
results effectively.  We have developed an interactive distributed
visualization system to examine, compare and display physics results based
in part on the same high performance parallel computing and communications
algorithms used in the physics simulation.

The visualization system provides a wide range of flexibility for image
composition and animation, making it easy to select, navigate and frame
large data sets for further study and presentation.  The system also
enables the visual experience of concurrent viewing by collaborators, which
is helpful to debug algorithms and suggest meaningful diagnostics.

The distributed visualization system currently uses a CM5 as a remote
compute and storage server coordinated through CMAVS/AVS(TM), with options
for local display ranging from a standard X server to a 3-D multi-screen
stereoscopic projection system using multiple SGI Reality Engine(TM)
processors (the CAVE [EVL]).

In particular, introducing an AVS derived scientific visualization
interface into the CAVE environment enables interactive exploration of
dimensions of parameter space which might otherwise remain hidden or
inaccessible.

References
[EVL] CAVE is a virtual reality theatre created in the Electronic
Visualization Laboratory of the University of Illinois at Chicago.

[WKM] R.E. Waltz, G.D. Kerbel and J.L. Milovich, "Toroidal
Gyro-Landau Fluid Model Turbulence Simulations In A Nonlinear Ballooning Mode
Representation With Radial Modes", Phys. Plasmas 1, 2229 (1994).
-------------------------------------------------------------------

NEAR REAL-TIME INTEGRATION OF SUPERCOMPUTING AND RIG TESTS
THROUGH HETEROGENEOUS DISTRIBUTED ON-LINE COMPUTATION AND
ADVANCED VISUALIZATION SOFTWARE

David A. Clark, Aerospace Engineer,
Vehicle Propulsion Directorate
Army Research Laboratory

In most turbine engine testing facilities, the tools and techniques of
Computational Fluid Dynamics (CFD) and advanced visualization have never
been applied to facilitate (near) real-time analysis of the test hardware.
New computer software technology has now been applied which allows
server-to-client remote procedure calls (RPC), enabling supercomputers to
be called from within the on-line test scanning program.  Coupled with
advanced visualization software and graphics workstations, it is possible
to view the inside of a test while it is being conducted.  Such a capability
can be as valuable to researchers in steering tests as X-rays are to doctors
in diagnosing illness.

Using this on-line system, a full turbomachine (compressor) has been visually 
analyzed by interpolating pressure and temperature instrument rakes to give a 
full flow-field view of the engine (compressor).  All data values
are non-dimensionalized at each grid cross-section and viewed at
varying ranges of iso-distortion surfaces.  Regions of low or high energy can
be seen as they proceed through the compressor stages.  A full range of
capabilities is displayed for both temperature and pressure using computer
animation techniques recorded to video.  Such views are unique and may provide
extra information to help understand inlet distortion as it relates to stall
margin.  If better stall management can be attained, compressors can be
operated more safely and possibly be allowed to operate closer to "peak
efficiency".
More efficient engines have a direct impact on operating costs and promise 
tangible monetary savings.
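
The per-cross-section normalization of the rake data described above can be
sketched as follows.  This is a hypothetical Python illustration: the paper
does not give the scanning program's actual variable names or scaling
convention, so a simple min-max scaling per axial station is assumed here.

```python
def nondimensionalize(cross_sections):
    """Scale each cross-section's readings to [0, 1] so that
    distortion patterns can be compared between stations.
    `cross_sections` is a list of lists of rake readings
    (pressure or temperature), one list per axial station."""
    result = []
    for station in cross_sections:
        lo, hi = min(station), max(station)
        span = hi - lo or 1.0  # avoid divide-by-zero on uniform flow
        result.append([(v - lo) / span for v in station])
    return result
```

Iso-distortion surfaces can then be drawn at fixed levels of the resulting
dimensionless field, independent of the absolute pressures or temperatures
at each stage.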

Secondly, CFD efforts are described in conjunction with the use of RPC to
supercomputers.  A velocity-gradient meridional, quasi-3D solution for five
blade rows is shown in context with rotating blade rows and presented on video.

The computational techniques are generic and can be applied in almost
any scientific area where on-line computer systems are used.
------------------------------------------------------

Spectral Element Modeling of Seismic Wave Propagation

G. Padoan, A. Pregarz, and E. Priolo
Osservatorio Geofisico Sperimentale
tel: 39 40 2140260
padoan@gem755.ogs.trieste.it

Abstract:
The numerical simulation of the propagation of both acoustic and elastic
wave fields in heterogeneous media is presented by an animated visualization.
The wave fields are computed numerically by the spectral element
method, according to a high-order finite element approach.  The elements
of the unstructured grid contain many edge and internal nodes, depending
on the order of the polynomial basis; for instance, for a sixth-order polynomial
each element has 45 nodes, of which 20 are edge nodes and 25 are internal nodes.
To manage unstructured data, AVS provides some pre-defined low-order
cells.  One way to represent in AVS the wave fields generated by high-order
methods based on unstructured grids is to define new high-order cells and
to implement well-suited graphics modules.

Advance:
This paper shows the results of a simulation of the propagation of both
acoustic and elastic wave fields in heterogeneous media, where the
unstructured high-order grid data are handled by defining new high-order
cells, and then implementing well-suited graphics modules for them.
------------------------------------------------------

------------------------------------------------------
--------------------RESEARCH TRACK--------------------
------------------------------------------------------
Chair:  Chuck Hansen
------------------------------------------------------

Real-Time Visual Control of Numerical Simulation

Upul Obeysekare, Fernando Grinstein, Gopal Patnaik,
and Chas Williams

Laboratory for Computational Physics and Fluid Dynamics
Naval Research Laboratory
Washington, DC 20375-5320

We address relevant issues and difficulties involved in the practical
implementation of real-time visualization with emphasis on
interactive control of numerical simulations.  Important issues governing
the implementation of this technique such as network data transfer
speeds, asynchronous module execution, architecture neutral implementation,
visual programming environment across heterogeneous computer architectures,
and user-interface design for visual control of the simulation are being
addressed.  Strategies to overcome difficulties associated with the
implementation of the concept are analyzed in the context of selected
numerical simulations implemented under AVS's heterogeneous
computing environment using remote Cray supercomputers and local graphics
workstations.  Future requirements for collaborative visualization in this
environment are also addressed.
-------------------------------------------------------------------

"Visualization and feature extraction in isotropic Navier-Stokes turbulence"

Victor M. Fernandez and Norman J. Zabusky
Department of Mechanical and Aerospace Engineering and CAIP Center,
Rutgers University, Piscataway, NJ 08855

Smitha Bhat and Deborah Silver
Department of Electrical and Computer Engineering and CAIP Center,
Rutgers University, Piscataway, NJ 08855

Shi-Yi Chen
IBM T.J. Watson Research Center and
Theoretical Division and Center for Nonlinear Studies,
Los Alamos National Laboratory, Los Alamos, NM 87545


ADVANCE:
Feature extraction and data reduction algorithms enhance
physical understanding in visualization of Navier-Stokes turbulence and
also provide an efficient way to deal with large datasets.

ABSTRACT:
We present feature extraction and data reduction algorithms and
provide insight into the types of problems that arise in dealing with
large datasets obtained from Navier-Stokes turbulence simulations with a
512x512x512 mesh resolution. The developed tools are based on
thresholding, object segmentation and low order ellipsoidal
quantifications and are applied to the search of coherent vortex
structures associated with maxima events in the turbulence field.  We
underline the importance of sharing tasks between the supercomputer
(CM5) and the workstation (SGI Onyx), where each machine may work more
efficiently at different stages of the data processing.  We obtain
visualizations that show the structure of the dominant coherent
objects.  The reduced representations employed make it possible to
examine different types of fields for possible correlations. The
quantification of the objects identified by the feature extraction
algorithms should contribute to the building of models that consider
both coherent structures and the random background observed in
Navier-Stokes turbulence.
-------------------------------------------------------------------

Wes Bethel
LBL

Modular Virtual Reality Visualization Tools

While the debate continues over whether the term "Virtual
Reality" is an oxymoron or a pleonasm, those of us tasked with
developing or using visualization software more often than not must
eschew this largely philosophical debate and focus our attention on more
practical matters: usable software and scientific progress.

VR implementations range from fully immersive to desktop systems.  The
goal of each of these implementations is to provide a user interface in
which a human can interact with a computer model in a way which is
intuitive, "easy to use", interesting and engaging.  We describe
an approach to desktop VR geared towards the user of modular scientific
visualization systems which combine inexpensive yet practical VR input
devices with a methodology for using the data generated by these devices
in a variety of ways.  For example, a tracker could be used in one
context to position a viewpoint, but in another, to orient a slice plane.

Most VR devices generate information which is six-dimensional: position
and orientation.  Along with that information, many have one or more buttons
which can be used to generate boolean events.  The methodology presented
shows one way to manage this type of input, and capitalizes on the
strengths of modular visualization environments by allowing the user
freedom of choice about the way in which the VR input device data is used
in the visualization network.
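
One way to realize this freedom of choice is a small dispatch layer between
the device driver and the visualization network.  The sketch below is a
hypothetical Python illustration (the paper does not specify an API): each
visualization parameter registers a callback, and a button event could call
`select` to change which binding receives the six-dimensional tracker data.

```python
class TrackerRouter:
    """Route six-degree-of-freedom tracker events to whichever
    visualization parameter the user has bound them to, e.g. the
    viewpoint in one context and a slice plane in another.
    (Hypothetical names; an illustration only.)"""

    def __init__(self):
        self.bindings = {}   # name -> callback(position, orientation)
        self.active = None

    def bind(self, name, callback):
        self.bindings[name] = callback

    def select(self, name):
        # e.g. a button press on the VR device cycles the active binding
        self.active = name

    def on_tracker_event(self, position, orientation):
        if self.active in self.bindings:
            self.bindings[self.active](position, orientation)
```

The same tracker stream thus drives different modules in the network without
any module needing to know about the physical device.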
-------------------------------------------------------------------

Quantitative Analysis of Reconstructed Rodent Embryos

Andy R. Haas, Richard A. Strelitz, Ph.D., William S. Branham, 
Daniel M. Sheehan, Ph.D.
System Analyst, R.O.W. Sciences, Inc. 
Division of Reproductive and Developmental Toxicology
National Center for Toxicological Research
Jefferson, Arkansas, U.S.A., 72079

ahaas@fdant.nctr.fda.gov
(501) 543-7011

Advance
Detailed insight into the development of rodent embryos is being achieved
through quantitative volume analysis at the National Center for Toxicological
Research.

Abstract
The Application Visualization System (AVS) provides an interactive rendering
environment suitable for assessing size (volume) changes in fetal biological
systems.  Altered fetal growth is likely to result in birth defects in
offspring.  However, early effects on growth are difficult to measure because 
of the small size of fetal organs.  Using laser scanning confocal microscopy, 
embryos can be "optically sectioned" to reveal organ structure.  Analysis of 
these optical sections, either singly or as reconstructed embryo objects, 
can detect early abnormal embryo growth.

AVS modules have been developed which model and annotate organ objects, 
support organ selection, and measure volumes.  Selecting an organ displays 
both an annotation describing the organ and the organ volume.  General purpose 
volume analysis is also obtained through a compute volume module.  Volume 
and surface area measurements may be obtained from any AVS surface geometry 
(e.g., isosurface output).  A slice module has also been developed for 
researchers to investigate complexities in an organ's surface structure such 
as folds or deep concavities.
--------------------------------------------------------------------------

Real Time MPP 3-D Volumetric Visualization: Medical Imaging on the 
        Cray T3D with AVS

        Wolfgang Kraske         Northrop Grumman Corp.
        Chris Asano             Cray Research Inc.

Abstract
A joint research effort of Northrop Grumman Corp. (NGC) and Cray Research Inc.
has produced the first real-time implementation of the Advanced Visual Systems
(AVS) volumetric ray tracing algorithm, tracer, on a Cray T3D computer.  The
AVS ray tracer was further enhanced for clinical diagnostic and therapeutic
applications with unique NGC morphological tissue classification algorithms
originally developed for military target acquisition applications.
This application demonstrates sustained 0.25-second
renderings of full color with transparency on 256**3 biomedical X-ray computed
tomography and magnetic resonance data sets.

Advance
We have integrated state-of-the-art military target acquisition technology
with AVS 3D visualization tools in a heterogeneous computing environment
to achieve a real-time implementation of enhanced visualization and medical
tissue classification on the Cray T3D MPP.
-----------------------------------------------------------------------------

--------------------------------------------------
--------ENVIRONMENTAL/EARTH SCIENCES TRACK--------
--------------------------------------------------
Chairs:  Theresa Rhyne & Wes Bethel
--------------------------------------------------

AVS in Climate Research

Dr. Joachim Biercamp
Head of Visualization Group
Deutsches Klimarechenzentrum GmbH (DKRZ)
(German Climate Computing Center)
Bundesstr. 55
D-20146 Hamburg
Germany

The German Climate Computing Center (DKRZ) provides the computer power for
quantitative computation of complex climate processes with sophisticated,
realistic models. DKRZ also tries to provide scientists with easy
access to visualization tools, helping them gain improved insight into
the huge amounts of data resulting from these computations. The paper will show
AVS5 visualizations of oceanographic, meteorological and geophysical data.
Applications include global warming, the global carbon cycle and
El Nino. We will also try to present first results obtained with AVS6.
-----------------------------------------------------------------------

CLIMATE SIMULATION STUDY III:
Supercomputing and Data Visualization

Philip C. Chen
Fujitsu America, Inc.
3055 Orchard Drive
San Jose, California  95134 U.S.A.
email:  pchen@fai.com


During the last two years, the Community Climate Model (CCM), a numerical
model developed and updated by the National Center for Atmospheric Research
(NCAR) scientists, has been ported to the Fujitsu VPX240 supercomputer.
Data generated by three versions of the CCM were used to explore data
visualization techniques suitable for climatological studies, and these
techniques were reported at the AVS '93 and '94 conferences.

Current research efforts involve using a supercomputing-visualization
facility for climate simulation and data visualization applications.  The
facility includes: a Fujitsu vector supercomputer VPX240 and a
vector-parallel-processing supercomputer VPP500, workstations, printers,
video recorders, a film recorder, and a scanner.  The supercomputers,
workstations, as well as video and graphics input/output devices are
connected by networking systems.  The supercomputers have been used mainly
for data generation, and workstations which are connected to supercomputers
have been used to visualize data.

The current version of the CCM, CCM2, has recently been ported to the Fujitsu
VPP500 supercomputer.  The VPP500 is a highly parallel, distributed-memory
supercomputer that includes multiple processing elements (PEs), each
PE having 1.6 GFLOPS performance.  With many PEs, the VPP500 has a
proven record of achieving more than 100 GFLOPS.  It is expected that
CCM2 will run better on the VPP500.  The possibility of visualizing data
while it is generated on the VPP500 is being investigated.

AVS macro-module networks were developed on an SGI workstation to study
basic climatological parameters and derived parameters.  The slowness of
AVS has been noted, especially when time animation and volume visualization
are involved.  AVS compute-intensive remote modules have been ported to the
Fujitsu VPX-series computer at the University of Manchester, and they have
been used successfully with AVS modules installed on workstations.  In the
future, these AVS remote modules will be made available on the on-site
supercomputer.  At that time, AVS execution speeds of the supercomputer and
the workstation will be compared.
-----------------------------------------------------------------------

Methods of Constructing a 3D Geological Model from Scatter Data

Jennifer Horsman
Earth Sciences Division
Lawrence Berkeley Laboratory
1 Cyclotron Road
Berkeley, CA  94720
jlhorsman@lbl.gov

Wes Bethel
Information and Computing Sciences Division
Lawrence Berkeley Laboratory
1 Cyclotron Road
Berkeley, CA  94720
ewbethel@lbl.gov


Abstract
Most geoscience applications, such as assessment of an oil reservoir or 
hazardous waste site, require geological characterization of the site. 
Geological characterization involves analysis of spatial distributions of 
lithology, porosity, etc.  Geoscientists often rely on two-dimensional
visualizations for analyzing geological data. Because of the complexity of the 
spatial relationships, however, we find that a three-dimensional model of 
geology is better suited for integration of many different types of data and 
provides a better representation of a site than a two-dimensional one. Being 
able to easily manipulate a large amount of heterogeneous data increases the 
level of interactivity by providing the geoscientist with the opportunity to 
detect and visually analyze spatial correlations and correlations between 
different types of data, and thus leads to an increased understanding of the 
data.

A three-dimensional model of geology is constructed from sample data
obtained from field measurements, which are usually scattered.  To create a
volume model from scattered data, interpolation between the data points is
required.  The interpolation can be computed using one of several
computational algorithms.  Alternatively, a manual method may be employed,
in which an interactive graphics device is used to enter by hand the
information that lies between the data points; for example, a mouse can be
used to draw lines connecting data points of equal value.  The combination
of these two methods presents yet another approach.
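As an illustration of the computational route, one of the simplest interpolation schemes for scattered data is inverse distance weighting.  The Python/NumPy sketch below is a generic example, not the algorithm used by the scat_3d or scatter_to_ucd modules:

```python
import numpy as np

def idw_interpolate(points, values, grid_points, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered data.

    points:      (N, 3) array of scattered sample locations
    values:      (N,) array of sampled values (e.g., porosity)
    grid_points: (M, 3) array of regular volume-node locations
    """
    # Pairwise distances between grid nodes and samples: shape (M, N)
    d = np.linalg.norm(grid_points[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer samples weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Four scattered porosity samples interpolated onto two volume nodes
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
vals = np.array([0.10, 0.20, 0.30, 0.40])
grid = np.array([[0.5, 0.5, 0.5], [0.0, 0, 0]])
print(idw_interpolate(pts, vals, grid))
```

At a grid node that coincides with a sample, the scheme reproduces the sampled value; elsewhere it returns a distance-weighted average of the samples.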

In this study, we compare selected methods of three-dimensional geological
modelling.  We used a flow-based visualization system (AVS) to construct
the geological models computationally.  Within this system, we used two
modules, scat_3d and scatter_to_ucd, to interpolate scattered data.  These
modules are compared with the combined manual and computational approach.
To demonstrate the latter, we used a geological modelling system.
-----------------------------------------------------------------------


-------------------------------------------------------------
--------------------------IMAGING TRACK----------------------
-------------------------------------------------------------
Chair:  Todd Rodgers
-------------------------------------------------------------

Maximum Detection Range of Target Edge
as a Function of Variable Precipitation and Cultural Obscurants

Clifford A. Paiva, MS
Research Branch, Countermeasures Division
Warfare Systems Department
Naval Surface Warfare Center
Dahlgren Division
(703) 663-4781

Problem
Performance of new-generation precision guided munitions (PGM) and smart
weapons is often unpredictable and unreliable across the global range of
battlefield environments, which include natural and hostile cultural
(man-made) countermeasures.  Field test results do not extrapolate well to
the full range of operational conditions.(1)  The primary factor inhibiting
resolution of automatic target recognition (ATR) performance problems has
been the inability to quantitatively characterize target discrimination
algorithms that include backgrounds and countermeasures (TDB/C).  This
study addresses one of the more serious problems in ATR algorithm
performance:  efficient segmentation of imagery containing high standard
deviation (clutter noise) values generated by variable precipitation,
turbulence, and smoke.

Approach
Four precipitation rates (75, 50, 25, and 5 mm/hr) through turbulence,
humidity, and obscurants were selected for a perturbation analysis, from
which available target edge intensities and first- and second-order
statistics were obtained.  A high ratio of differential scattering cross
section to total scattering cross section for the segmented scene is
assumed.  This effectively reduces the grayscale intensity of the target
(an M60 Main Battle Tank) and allows an assessment of target detection via
morphological image processing techniques.  The final edge intensities and
statistics were then available for automatic target recognition (ATR)
geometric pattern referencing, as well as correlation mapping routines.
Maximum detection sensor-to-target range for variable rain rates,
turbulence, and obscurants was summarized.

Image processing techniques for this analysis included erosion/dilation and
open/close operations, as well as region-growing image segmentation.  Four
LM8 smoke grenades generated obscurants, which moved normal to the
line-of-sight (LOS) and through the turbulence.  The imagery is real 9-12
micron bandpass imagery taken at the US Army Keweenaw Research Center.
Sensor-to-target range commences at 160 meters (maximum) and closes to 600
meters (minimum).  Pixels on the partially obscured extracted edges are
then counted and plotted versus range and precipitation constants.
Statistics (standard deviation and kurtosis) are plotted.

Results

The results indicated that the number of pixels-on-edge (edgels) varies as
a function of three independent variables:  (1) range; (2) precipitation
rate; and (3) position of the obscurant (smoke) relative to the target.
Decreased target edge mean and increased standard deviation, as functions
of increased precipitation, resulted in reduced detection and
classification potential for automatic target recognition (ATR) algorithms.
Although some pattern referencing information is present for
Marr-Hildreth-type zero-crossing edge detectors, segmentation operations
were severely stressed, and attempts to reduce high and low clutter
frequencies were not successful beyond 1000 meters sensor-to-target range,
particularly for the partially obscured target.

Nevertheless, although the target intensity was significantly less than the
smoke intensity, the high target intensity gradients and low smoke
gradients meant that the final segmented, edge-enhanced scene successfully
revealed only the target (M60 MBT) at ranges less than 1000 meters.  Smoke
obscurant was effectively filtered from the imagery as the missile closed
on the target.  Actual counts of edgels (versus precipitation rate)
indicated strong dependence on obscurant cloud position relative to the
target, precipitation rate, and range.  An eight-minute video of the
scenario is provided.  The results give some insight and perspective
regarding the maximum detection ranges that may be expected in severely
cluttered natural and cultural environments.
-------------------------------------------------------------

A GENERALIZED CONVOLVER MODULE

Johan Wiklund and Hans Knutsson
Department of Electrical Engineering
Linkoeping University
S-581 83 Linkoeping
SWEDEN

A procedure to perform convolutions on multi-dimensional data with
arbitrary filter kernels is a basic tool in image and signal
processing. Typical input data are 1D signals, 2D images, 3D volumes,
3D spatio-temporal image sequences and 4D volume sequences. Each
coordinate, (pixel, voxel, toxel), can contain a scalar value or a
vector.

If the kernel and/or the input data have a vector length larger than
one, a generalized convolution is needed.  In this case the
multiplication in the convolution is replaced by a vector combination.
Examples of vector-valued input data are color (RGB), 2D vector
fields, 3D vector fields, tensor fields, etc.

A scheme for performing generalized convolutions is presented. A
flexible convolver, which runs on standard workstations, has been
implemented. It is designed for maximum throughput and flexibility.
The implementation incorporates spatio-temporal convolutions with
configurable vector combinations. It can handle general multilinear
operations, i.e. tensor operations on multidimensional data of any
order. The input data and the kernel coefficients can be of arbitrary
vector length.

The kernel coefficients can be scattered, i.e. they need not be
uniformly placed inside a box.  The computational cost increases
linearly with the number of kernel coefficients; it does not depend on
the size of the kernel bounding box.  A region of interest (ROI),
e.g. a spatial rectangle in the input over which the convolution
should be applied, can be defined.  Subsampling is user-selectable and
decreases the computational cost of the convolution.
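The scattered-kernel idea can be sketched as follows.  This Python/NumPy fragment is a minimal illustration, not the authors' C-library convolver: the kernel is a list of (offset, coefficient) pairs, one shift-and-add pass per coefficient, so the cost is linear in the coefficient count and independent of the bounding box (circular boundary handling is used for brevity):

```python
import numpy as np

def scattered_convolve(signal, offsets, coeffs):
    """Convolve a 2D array with a kernel given as scattered (offset, coeff)
    pairs.  Cost is one pass over the data per coefficient, regardless of
    how far apart the coefficients lie."""
    out = np.zeros_like(signal, dtype=float)
    for (dy, dx), c in zip(offsets, coeffs):
        # Shift the signal by the coefficient's offset and accumulate.
        # np.roll wraps around: circular boundaries, for brevity.
        out += c * np.roll(np.roll(signal, dy, axis=0), dx, axis=1)
    return out

img = np.eye(4)
# Three scattered coefficients: the centre plus two distant taps
offsets = [(0, 0), (2, 0), (0, 3)]
coeffs = [1.0, 0.5, 0.25]
print(scattered_convolve(img, offsets, coeffs))
```

A kernel consisting of a single unit coefficient at offset (0, 0) reproduces the input, and adding a far-away tap costs no more than adding a nearby one.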

There are two basic classes of filters: FIR (finite impulse response)
and IIR (infinite impulse response).  Both types are supported by the
convolver, which is configurable for IIR filtering in the time
dimension.

The implementation is done as a C-library and a graphical user
interface in AVS (Application Visualization System).
-----------------------------------------------------------------------

Marquess Lewis (melewis@tasc.com)
TASC
55 Walkers Brook Drive
Reading, MA  01867
(617)942-2000

ABSTRACT
                   Radiometric and Geometric Adjustment of
                         Airborne Spectrometer Data
                         within the AVS Environment

                             Marquess E. Lewis
                  MTS, Signal and Image Technology Division
               TASC, 55 Walkers Brook Drive, Reading MA  01867

Multispectral and hyperspectral imagery collected from airborne platforms is
finding increasing use in mineralogical, agricultural, and oceanic studies.
Prior to application-specific analyses, these data must be corrected for
geometric distortions induced by the instrument optics and collection
geometry, and for the radiometric response characteristics of the sensor.
TASC has recently fielded a ground station for the processing and analysis
of airborne spectrometer data.  A component of this ground station allows
scientists to interactively perform "what if" analyses using the AVS
environment with a large set of custom modules.  Within the AVS environment,
radiometrically and geometrically adjusted data may also be produced.  This
paper describes the overall TASC MIDAS mission and then focuses on the data
normalization process as implemented within AVS.  Results using recently
collected imagery will be shown.
-----------------------------------------------------------------------

TITLE:
Image Processing in the Spatial and Frequency Domains

Stephen L. Schultz
System Programmer, Center for Imaging Science
Rochester Institute of Technology
One Lomb Memorial Drive
Rochester, New York  14623
(716) 475-5294, slspci@rit.edu

ABSTRACT:
The Center for Imaging Science uses AVS to bridge the gap between the
numerous imaging and remote sensing packages utilized at the Center.
In addition, the Center's library of AVS modules provides additional
functionality missing in the various packages, the most notable of
which is frequency domain image processing.

AVS allows the researchers at the Center to examine the feasibility of
applying various spatial or frequency domain solutions to image processing
problems with a minimal amount of time or effort.  The Center's module
library includes support for most of the image storage formats used by
the various packages in use.
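As a generic illustration of the kind of frequency-domain processing referred to here (a sketch, not one of the Center's actual modules), a low-pass filter can be applied by transforming an image to the frequency domain, zeroing the high-frequency bins, and transforming back:

```python
import numpy as np

def lowpass_filter(image, cutoff):
    """Frequency-domain low-pass filter: FFT, zero out frequency bins
    beyond `cutoff` (a fraction of the sampling rate, 0..0.5), inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    # Normalized distance of each frequency bin from the DC component
    r = np.sqrt(((y - rows / 2) / rows) ** 2 + ((x - cols / 2) / cols) ** 2)
    F[r > cutoff] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# A constant image has only a DC component and passes through unchanged
img = np.ones((8, 8))
print(np.round(lowpass_filter(img, 0.25), 6))
```

The same structure (forward transform, mask in frequency space, inverse transform) supports high-pass, band-pass, and matched filtering by changing the mask.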

This presentation will demonstrate some of this functionality along with
the considerations used in developing the module library.  Concrete examples
from research projects conducted by the Center will be included.


ADVANCE:
A demonstration of the AVS modules developed at the RIT Center for Imaging
Science used to bridge the gap between the numerous imaging and remote sensing
packages utilized at the Center along with research project examples and
considerations taken in developing the modules.
-----------------------------------------------------------------------

--------------------------------------------------------
--------------COMMERCIAL TRACK--------------------------
--------------------------------------------------------
Chair:  Graham Walker
--------------------------------------------------------

CHALLENGES IN INFORMATION VISUALISATION

Dr Graham Walker
graham.walker@bt-sys.bt.co.uk
Advanced Applications and Technologies
BT Laboratories
Martlesham Heath
Ipswich IP5 7RE
UK

We are surrounded by an ever-growing, ever-changing world of data.  However,
the value of this data is not intrinsic, but lies in enabling us to make
more informed decisions and in increasing our shared knowledge and
understanding.  In this paper, we explore the critical role for visualisation
in bridging the gap between the abstract analytical world of data and the
digital computer, and the real, analogue world of human problems and
experience.  Our discussion is structured around four Challenges, derived
from current trends in data and business practice: a growing volume of data
with declining information content; more complex data analysis tools and
models; increasingly abstract data tools and models; and a wider, less
specialised audience.  We illustrate our comments with examples from work
on visualisation of telecommunications data at BT Laboratories.  The majority
of these examples have been developed using AVS.
-----------------------------------------------------------------------

Using AVS as a visualization tool for the WWW

Jeff Wang
MCNC/North Carolina Supercomputing Center
3021 Cornwallis Road
Research Triangle Park, NC 27709
jfwang@robin.ncsc.org

The World Wide Web (WWW) and internet information technology have added a
new dimension to the application area of AVS.  The lack of visualization
capability in WWW tools can be compensated for by using AVS.  The images
generated by AVS can be browsed and archived on the WWW.

This paper presents a prototype that uses NCSA Mosaic to run AVS for
demonstration purposes.  AVS can generate both GIF images and MPEG data
files; NCSA Mosaic and its multimedia viewers can access these files and
produce visualization results of good quality.  The new version of NCSA
Mosaic provides a common client interface (CCI) that allows more interactive
AVS image browsing, and it opens up new potential application areas such as
images on demand, online tutorials, and collaboratories in AVS development.
The paper also evaluates the efficiency and pitfalls of the connection
between AVS and the WWW.
-----------------------------------------------------------------------

An AVS Interface to the Aurora(TM) Dataserver


                          Charles Falkenberg
                    Department of Computer Science
                        University of Maryland
                           College Park, MD

                            Mike Achenbach
                          Xidak Corporation
                            Palo Alto, CA

                            Ravi Kulkarni
                  Advanced Visualization Laboratory
                        University of Maryland
                           College Park, MD

                            Vince Patrick
                 Prince William Sound Science Center
                             Cordova, AK

Abstract:

In this paper we present a framework for integrating AVS with a
scientific database management system consisting of the Aurora
Dataserver from Xidak.  The Aurora Dataserver includes a wide range of
database functionality for spatially coordinated data. This
functionality includes SQL queries, a high-level data model (i.e.,
netCDF), historical meta-data, and transaction management in a
client-server architecture.  Our long-term objective is to use AVS
and the Aurora Dataserver to form an integrated view of the Prince
William Sound (PWS) ecosystem in Alaska. The datasets used with the
Aurora Dataserver are taken from a diverse collection of oceanographic
and biological data from the PWS Ecosystem Assessment project (SEA)
undertaken as a result of the Exxon Valdez oil spill. We envision
using the Aurora Dataserver for temporal and spatial queries and AVS
networks to help test different hypotheses and models and to compare
different datasets.  Our initial efforts have been the development of
two modules which provide read/write mapping from AVS fields to Aurora
datasets as well as allowing SQL queries, sub-sampling, and region
extraction. We will describe the functionality of the Aurora
Dataserver and its underlying data model, the interface to our AVS
modules, and the longer term enhancements necessary for support of the
SEA project.
-----------------------------------------------------------------------

Wayne Haidle
Montana-Dakota Utilities Co.

Correlating Time-Based Power System Parameters via Preprocessing
UNIGRAPH Command File Templates

The point-and-click capability of UNIGRAPH is adept at producing a command
file that replicates a given interactive process.  However, the command
file is not flexible in adapting to a variety of batch process
interrogations.  A command file preprocessor can be used to overcome this
limitation by creating a unique command file from a command file template.
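The templating idea can be sketched in a few lines.  The command syntax below is invented for illustration and is not actual UNIGRAPH syntax; the point is that a preprocessor substitutes run-specific parameters into a fixed template to produce a unique command file per batch interrogation:

```python
import string

# Hypothetical command file template: placeholders mark the values
# that change between batch interrogations.
TEMPLATE = string.Template("""\
OPEN DATASET $dataset
SELECT CHANNEL $channel
PLOT RANGE $start $end
""")

def make_command_file(**params):
    """Substitute parameters into the template to produce a command file."""
    return TEMPLATE.substitute(params)

print(make_command_file(dataset="feeder12.dat", channel="KW",
                        start="0000", end="2359"))
```

Each batch run then supplies its own parameter set, while the interactive process that produced the template is replicated unchanged.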
-----------------------------------------------------------------------------

------------------------------------------------------------
-------------------OIL & GAS TRACK--------------------------
------------------------------------------------------------
Chairs:  Mike Ray & Annette Walsh
------------------------------------------------------------

Title:          Visualization of Multi-component Saturation Distributions
                in Oil Reservoir Models

Authors:        Dong Ju Choi and Mitchell Roth
                Arctic Region Supercomputing Center
                University of Alaska
                Fairbanks, AK 99775-6020
                907-474-6307
                fax: 907-474-5494
                email: choi@arsc.edu and roth@arsc.edu

Abstract:
Sophisticated oil reservoir models simulate pressures and saturations
for dozens of hydrocarbon components in a 3D geometry.  Visualizing the
composition of the components is difficult using the standard AVS color table
when more than two components are present.  For multi-component models
a transformation of the standard color table is necessary.  One approach
is to use a color table built from a regular polygon, where each vertex
represents a pure component.

In the current study, a triangular color table is employed to display
oil, gas, and water saturations resulting from the 3D time dependent
simulation of an oil field employing gas injection for enhanced recovery.
This paper will show the process of converting the triangular color
table into an AVS color table and its application to the visualization
of oil reservoir models.
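The core of the triangular color table is barycentric interpolation: each saturation triple picks a point inside the triangle, and the color is the saturation-weighted blend of the vertex colors.  The sketch below assumes a particular vertex color assignment (oil red, gas green, water blue), which is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

# Assumed pure-component colors at the triangle's vertices:
# oil -> red, gas -> green, water -> blue.
VERTEX_COLORS = np.array([[1.0, 0.0, 0.0],   # oil
                          [0.0, 1.0, 0.0],   # gas
                          [0.0, 0.0, 1.0]])  # water

def saturation_to_rgb(sat):
    """Map (oil, gas, water) saturations, which sum to 1, to an RGB color
    by barycentric interpolation over the triangle's vertex colors."""
    sat = np.asarray(sat, dtype=float)
    sat = sat / sat.sum(axis=-1, keepdims=True)   # normalize defensively
    return sat @ VERTEX_COLORS

print(saturation_to_rgb([1.0, 0.0, 0.0]))        # pure oil -> red
print(saturation_to_rgb([1 / 3, 1 / 3, 1 / 3]))  # equal mixture -> gray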
-----------------------------------------------------------------------

ABSTRACT
Use of AVS in the Uncertainty Analysis of the Depth Structure of a
Hydrocarbon Reservoir

Paolo Ruffo, Livia Bazzana, Ernesto Della Rossa, Rita Colombo
Senior Professional, Integrated Interpretative Applications Dept.
AGIP  SpA
Email: ruffo@agip.geis.com

In the field of depth conversion of seismic time maps, AGIP has developed a
methodology that allows evaluation of the uncertainty of the depth model.
This methodology was implemented in a software package (called GEODE) that
we created using AVS as a development tool.  This gave us several
advantages; the final package is user-friendly, easy to maintain,
integrated into our processing flow, and flexible enough to be fitted to
variable requirements and future modifications.
The conventional depth conversion technique produces a unique depth map
starting from a seismic time map and a seismic velocity map.
In the GEODE approach, the conventional unique depth map is interpreted as
an average result; that is, its difference from the unknown reality is on
average zero.

The basic principle applied in GEODE is to start from the knowledge that
the seismic velocities are uncertain and to apply a geostatistical
technique to evaluate the effect of the velocity uncertainty on the depth
uncertainty.  In a way similar to the classical Monte Carlo technique, we
produced a number of geostatistically simulated velocity fields, all
compatible with the input velocity data, their uncertainty, and their
spatial correlation.  Using all these simulated velocity fields in the
depth conversion process, we produced several simulated depth models
instead of just one, as with a conventional approach.
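As a greatly simplified sketch of this Monte Carlo propagation (independent Gaussian velocity draws rather than GEODE's geostatistical simulations, which honor spatial correlation; all numbers are invented), depth uncertainty can be derived from velocity uncertainty as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer example: a two-way travel time map (seconds)
# and an uncertain interval velocity (m/s) with given mean and std.
time_map = np.array([[1.00, 1.05], [1.10, 1.20]])   # s, two-way time
v_mean, v_std = 2500.0, 100.0
n_sims = 1000

# Each simulation draws a velocity field and converts time to depth:
# depth = velocity * two-way time / 2
depths = np.stack([rng.normal(v_mean, v_std, size=time_map.shape)
                   * time_map / 2.0
                   for _ in range(n_sims)])

depth_mean = depths.mean(axis=0)   # the "conventional" average depth map
depth_std = depths.std(axis=0)     # depth uncertainty at each map node
print(np.round(depth_mean), np.round(depth_std))
```

The ensemble of simulated depth maps, rather than a single map, is what supports the distribution of reservoir volumes discussed below.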

The final analysis and quantification of the uncertainty of the depth model
is based on an AVS interactive graphical interface, so that the user may
easily synthesise and understand the characteristics of the stochastic
depth model.  One result of this approach is that we can produce a
distribution of volumes (instead of just one number) and use it in the
economic evaluation of the reservoir.

ADVANCE
The use of AVS modularity aided the realisation of a package that allows
the user to interactively estimate and analyse the uncertainty of the depth
model and of the volume of a potential reservoir.
-----------------------------------------------------------------------

Title:  Developing an Oil Reservoir Simulator Post-processor Using
	agX/Toolmaster & UIM/X

Speaker:        Dr. David Pottinger,
        Principal Software Engineer,
        Consultancy Services,
        AEA Technology
        Dorchester
        Dorset, UK
        DT2 8DH

email:  david.pottinger@aea.orgn.uk
tel:    (+44) 1305 202896
fax:    (+44)1305 202110

Abstract:
An application has been developed by AEA Technology Consultancy Services to
fulfill a requirement to visualize the results of reservoir simulations.  The
emphasis has been on the day-to-day requirements of engineers, namely to 
provide line chart, contour, and cross-section displays.  Whilst many
current applications provide high-end three-dimensional visualization, a
lack of tools able to support such day-to-day displays has been
identified.  agX/Toolmaster was
chosen as the most appropriate graphics library due to its range of
functionality, speed, quality of displays and ease of use, as well as the
range of hardcopy devices supported.

The application was developed in two distinct parts - the graphical user
interface, using UIM/X, and the functionality, using agX/Toolmaster.  
This allowed the interface and the functionality to be designed, 
implemented and tested independently.  Communication between the two parts
was achieved using a command language: the interface generated commands
which were passed to a command handler and processed accordingly.
This approach allows the application to run both interactively, driven by the
interface, and in background, driven by a command file.  The use of Motif
and Toolmaster ensures that the application is easily ported to a range of 
platforms.
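The command-language design can be sketched as follows.  The commands and handler below are hypothetical, not techVISION's actual command language; the point is that the same handler serves both the interactive interface and a batch command file:

```python
# Hypothetical command handler: the GUI and a batch command file both
# emit the same text commands, so the application runs identically in
# interactive and background modes.
def handle(command, state):
    verb, *args = command.split()
    if verb == "LOAD":
        state["file"] = args[0]
    elif verb == "PLOT":
        state["plots"] = state.get("plots", 0) + 1
    return state

def run(commands):
    """Drive the handler from any iterable of command lines."""
    state = {}
    for line in commands:
        line = line.strip()
        if line:
            state = handle(line, state)
    return state

# Background mode: the handler driven by a command file's lines
print(run(["LOAD results.dat", "PLOT", "PLOT"]))
```

In interactive mode the interface would feed the same strings to `handle` one at a time, which is why the two modes stay behaviorally identical.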

Using UIM/X allowed the development of prototype interfaces on which users
commented prior to final implementation.  The range of options provided
by Toolmaster allowed the images to be easily customized through a range
of dialog options.  The design of the dialogs has been aimed at providing
sensible initial default settings and ease of use.  Extensive use of
resources allows both the interface and the command language to be
customized for overseas markets.

Advance:
A presentation of techVISION - an application developed using agX/Toolmaster 
for visualisation of oil reservoir simulation results, and UIM/X to provide a
powerful and easy to use graphical user interface.
----------------------------------------------------------------------- 

------------------------------------------------------------
-----------------------MEDICAL TRACK------------------------
------------------------------------------------------------
Chair:  Marc Kessler
------------------------------------------------------------

Loyd Myers
Dept. of Biological Structure,
University of Washington
(206) 543-5480
myers@biostr.washington.edu

VOLUME RENDERING OF MULTI-SPECTRAL MR DATA FOR BRAIN LANGUAGE SITE MAPPING

             Loyd M. Myers, Jeff Prothero, James F. Brinkley


We describe a technique for visualizing cortical anatomy in relationship to
arteries and veins in order to map language sites in a 3D coordinate system.
The technique requires three sets of MRI data, optimized for the three tissue
types, to be acquired pre-operatively, registered, combined, and finally rendered
in a single image.   

The three datasets are registered from machine coordinate information contained
in the MRI headers.  Semi-automated segmentation, using an adaptive region growing
algorithm followed by morphological dilation, is used to produce a rendering mask.
The three data sets are then masked, combined, and rendered to produce the final image.
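The segmentation steps can be illustrated in miniature.  The sketch below uses a fixed-tolerance region grower and a single 4-connected dilation step, a simplification of the authors' adaptive algorithm, and works on a tiny 2D example rather than MRI volumes:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a binary mask from `seed`, accepting 4-connected neighbors
    whose intensity is within `tol` of the seed intensity."""
    mask = np.zeros(image.shape, dtype=bool)
    target = image[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(image[ny, nx] - target) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def dilate(mask):
    """One step of 4-connected morphological dilation."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

img = np.array([[9, 9, 1, 1],
                [9, 9, 1, 1],
                [1, 1, 1, 1],
                [1, 1, 1, 9]], dtype=float)
# Grow from the bright region, then pad the mask outward by one pixel
mask = dilate(region_grow(img, (0, 0), tol=1.0))
print(mask.astype(int))
```

Note that the bright pixel at the far corner is excluded because it is not connected to the seed region; the dilation then pads the grown mask outward, as in the rendering-mask construction described above.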

The technique was applied to three patients, and the rendered results
compared to intraoperative photographs of the cortical surface.  In each
case, the correlation of landmarks between the photographs and the rendered
images was sufficient to allow a neurosurgeon to correctly locate the
language sites in the rendered image.

This technique is implemented by a combination of native AVS modules,
AVS-based user applications, and in-house code which is accessed as a
server through AVS client modules.  This approach has allowed us to develop
a complex heterogeneous application from pieces written by programmers
working independently in different programming environments.


ADVANCE:
A technique for visualizing cortical anatomy in relationship to arteries and
veins from MR data in order to map language sites in the human brain is described.   

-----------------------------------------------------------------------------


Ted Beatie
Mass General
Center for Imaging and Pharm Research
Charlestown, MA
617-726-7834
email tcb@cipr.mgh.harvard.edu

(title only for now)
Comparison of 2D and 3D segmentation techniques for volume determination in
CT phantoms


-----------------------------------------------------------------------------

An AVS-based System for Optimization of Conformal Radiotherapy Treatment Plans

J.J. Kim, M.L. Kessler, N. Dogan, and D.L. McShan
The University of Michigan, Ann Arbor, MI 48109

While it is now possible using computer-controlled treatment machines to employ
larger numbers of fields to better conform dose to a target volume, current
interactive planning tools are not effective for handling the increased degrees
of freedom.  Therefore, some level of computer-aided optimization is needed.
Towards this goal, we have developed an AVS-based system which allows the
treatment planner to rapidly specify a range of possible beam orientations and
weights and to interactively construct cost functions to be optimized and
constraints to be satisfied.  Examples of optimization criteria include
biophysically motivated cost functions using normal tissue complication and
tumor control probabilities. Once the necessary information is given, an
optimization is begun and intermediate results, such as cost history, beam
weights, and dose-volume histograms are displayed.  At any time during an
optimization, the process can be interrupted, any parameter or criterion
modified, and then restarted.  By considering only dose points within the
tissues under consideration, it takes only a few minutes to perform thousands
of iterations of beam weightings and cost evaluations.  The blend of automated
and interactive optimization allows the treatment planner to manage the large
number of degrees of freedom possible.  This presentation will describe the
details of this system and provide examples of its use.
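A toy version of the kind of beam-weight optimization described can be sketched as follows.  The dose matrix and numbers below are invented, and the cost is a simple quadratic deviation from a prescribed dose minimized by projected gradient descent over nonnegative weights, rather than the biophysically motivated cost functions and optimizer used by the authors:

```python
import numpy as np

# Hypothetical dose matrix: dose_per_beam[i, j] is the dose delivered to
# tissue point i by unit weight of beam j (values invented for illustration).
rng = np.random.default_rng(1)
dose_per_beam = rng.uniform(0.0, 1.0, size=(50, 6))
target_dose = np.full(50, 2.0)   # prescribed dose at each point

# Projected gradient descent on a quadratic cost, keeping weights >= 0
w = np.ones(6)
for _ in range(2000):
    residual = dose_per_beam @ w - target_dose
    grad = dose_per_beam.T @ residual
    w = np.maximum(w - 0.01 * grad, 0.0)

cost = float(np.sum((dose_per_beam @ w - target_dose) ** 2))
print(np.round(w, 3), round(cost, 3))
```

Restricting the sum to dose points inside the tissues of interest, as the abstract notes, keeps each cost evaluation cheap enough that thousands of iterations take only minutes.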

This work was supported in part by NIH grant PO1-CA59827.
-----------------------------------------------------------------------------

Chris Siegel
NYU School of Medicine
212-263-5744

Creating 3D Models from Medical Data using AVS.

Because of its easily extended set of processing modules, AVS has grown
to be a very powerful tool for the extraction of object models from
medical images.  However, not all of the challenges in the task of accurately
isolating the objects of interest without human intervention have been met.
I will explain what I have been able to accomplish using AVS as a processor
for CT and MRI patient data, and what stumbling blocks and brick walls I
have come up against.
-----------------------------------------------------------------------------

