CT&T Expertise Program
To maintain its usefulness, this site will need to be continually updated, so any corrections or comments from users are welcome and should be sent to firstname.lastname@example.org.
- Programming Support Tools
- Numerical Algorithms and Maths Subroutine Libraries
- Grid Projects and Tools for Cooperative Work
- Visualization and VR
- Large Scale Data Management
- A glossary of high performance computing terms from NPAC at Syracuse.
- PTLib, the Parallel Tools Library, is an Internet source of information about high quality parallel systems software and tools, both research and commercial.
PTLib provides:
- Cataloging and evaluation of parallel systems software and tools
- Distribution of software where permitted by authors
- Help with obtaining and installing software
- Communication between software authors and user community
The software categories covered include:
- Parallel debuggers and performance analyzers
- Communication libraries
- Compiler technology
- Parallel I/O
- Distributed processing tools.
Of particular interest on this site is the software catalogue's list of execution and performance analyzers, including debuggers.
- The Parallel Tools Consortium
(Ptools) is a special-interest group that brings together tool users, developers, and researchers with the goal of improving the usability and availability of parallel tools.
Ptools has three primary roles:
- Ptools provides a forum for interactions involving tool users, developers, and researchers
  - Creates opportunities for dialog between tool users and tool developers to identify user needs and how tools can be made more responsive to them
  - Promotes discussion and technical exchanges among tool researchers and developers from different organizations
- Ptools promotes the development and dissemination of usable tools
  - Encourages and facilitates projects to develop parallel tools that respond to particular user needs and can be made freely available on multiple computer platforms
  - Assists the dissemination of parallel tools by publicizing information on their availability
- Ptools serves as a liaison with other special-interest groups and standards efforts
  - Provides input on behalf of tool users and developers to groups defining standards that relate directly or indirectly to parallel tools
  - Communicates information about standards and other developments of interest to tool users, developers, or researchers
- The DOE ACTS Collection
The DOE (U.S. Dept of Energy) ACTS (Advanced CompuTational Software) Collection is a set of DOE-developed software tools that make it easier for programmers to write high performance scientific applications for parallel computers.
The tools fall into four categories: numerical tools, tools for code development, tools for code execution, and tools for library development. Several of them are referred to individually in later sections of this document.
- ParaGraph
ParaGraph is a graphical display system for visualizing the behavior and performance of parallel programs on message-passing parallel computers. It takes as input execution trace data provided by PICL (Portable Instrumented Communication Library), developed at Oak Ridge National Laboratory and available from netlib. PICL optionally produces an execution trace during an actual run of a parallel program on a message-passing machine, and the resulting trace data can then be replayed pictorially with ParaGraph to display a dynamic, graphical depiction of the behavior of the parallel program. ParaGraph provides several distinct visual perspectives from which to view processor utilization, communication traffic, and other performance data in an attempt to gain insights that might be missed by any single view.
- Paradyn Parallel Performance Tool
Paradyn is a tool for measuring the performance of large-scale parallel programs. The goal is to provide detailed, flexible performance information without incurring the space and time overhead typically associated with trace-based tools. Paradyn achieves this goal by dynamically instrumenting the application and automatically controlling the instrumentation in search of performance problems. Paradyn also provides decision support for the user by helping to decide when and where to insert instrumentation, and it explains performance bottlenecks using descriptions and visualizations. Paradyn maps performance data to multiple layers of abstraction, and the user can choose to view it in terms of high-level language constructs or low-level machine structures.
- Nupshot
Nupshot is a performance visualization tool that displays trace files in either the MPICH MPE log format or the PICL format. Nupshot has two views -- a Timeline view that shows process states and message passing, and a Mountain Ranges view that shows histograms of statistics for the various MPI calls.
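As a hedged illustration of where such trace files come from, here is a minimal sketch of an MPI program instrumented by hand with MPE's logging calls; the state name, color, and logfile prefix are arbitrary illustrative choices, and header and library names (e.g. -lmpe) vary between MPICH installations.

```cpp
// Minimal sketch, assuming an MPICH installation with the MPE logging library.
#include <mpi.h>
#include <mpe.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPE_Init_log();

    /* Define a custom state as a start/end event pair. */
    int ev_start = MPE_Log_get_event_number();
    int ev_end   = MPE_Log_get_event_number();
    MPE_Describe_state(ev_start, ev_end, (char*)"solve", (char*)"red");

    MPE_Log_event(ev_start, 0, (char*)"");
    /* ... the computation being traced goes here ... */
    MPE_Log_event(ev_end, 0, (char*)"");

    /* Merge the per-process logs into a single logfile for replay;
       the exact extension and format depend on the MPE version. */
    MPE_Finish_log((char*)"trace");
    MPI_Finalize();
    return 0;
}
```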
- SCIRun Computational Steering Software System
SCIRun is a scientific programming environment that allows the interactive construction, debugging, and steering of large-scale scientific computations. Using this "computational workbench," a scientist can design and modify simulations interactively via a dataflow programming model. SCIRun enables scientists to design and modify model geometry, interactively change simulation parameters and boundary conditions, and interactively visualize geometric models and simulation results. SCIRun plays several roles as a computational tool (e.g. resource manager, thread scheduler, development environment), and is an application of object oriented design (implemented in C++) to the scientific computing process.
- Classdesc 1.0
Classdesc is a system for adding reflection to C++, i.e. the ability to query an object's structure at runtime. This is different from run-time type identification (RTTI), which merely returns a unique signature for an object's type.
It consists of a preprocessor that parses class definitions and outputs definitions of an overloaded function (called an action) that is recursively called on an object's members. By defining the action on basic types such as int, float and char, the action can be called on any arbitrary type.
The classdesc distribution comes with a predefined action called pack, which performs serialisation on objects (packing an object into a binary representation, which may optionally be machine independent).
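To make the action mechanism concrete, the following hand-written sketch shows the shape of the overloads the preprocessor generates for a small struct; pack_buffer and Particle are hypothetical names used for illustration, and the real generated code and the real pack interface differ in detail.

```cpp
// Illustrative sketch only: hand-written overloads in the style of the
// "action" functions the classdesc preprocessor generates.
#include <vector>

struct pack_buffer { std::vector<char> data; };   // hypothetical buffer type

// Action defined on basic types: append the raw bytes to the buffer.
inline void pack(pack_buffer& b, const int& x)
{
    const char* p = reinterpret_cast<const char*>(&x);
    b.data.insert(b.data.end(), p, p + sizeof x);
}
inline void pack(pack_buffer& b, const double& x)
{
    const char* p = reinterpret_cast<const char*>(&x);
    b.data.insert(b.data.end(), p, p + sizeof x);
}

struct Particle { int id; double mass; double pos; };

// What the preprocessor would emit for Particle: recurse into each member,
// so the action defined on basic types handles the whole object.
inline void pack(pack_buffer& b, const Particle& p)
{
    pack(b, p.id);
    pack(b, p.mass);
    pack(b, p.pos);
}
```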
ClassdescMP builds on the serialisation capability of Classdesc to provide an easy to use MPI programming environment for C++ users.
- NETLIB, a public domain collection of mathematical software, papers, and databases.
HPC-Netlib is a high performance branch of the Netlib mathematical software repository. HPC-Netlib provides information about high performance mathematical software, both research and commercial, as well as a roadmap to software selection and performance issues.
HPC-Netlib provides:
- Cataloging and evaluation of high performance math software
- Distribution of software where permitted by authors
- Help with obtaining and installing software
- Communication between software authors and user community
- ScaLAPACK
The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.
ScaLAPACK is designed for heterogeneous computing and is portable to any computer that supports MPI or PVM.
Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. (For such machines, the memory hierarchy includes the off-processor memory of other processors, in addition to the hierarchy of registers, cache, and local memory on each processor.) The fundamental building blocks of the ScaLAPACK library are distributed memory versions (PBLAS) of the Level 1, 2 and 3 BLAS, and a set of Basic Linear Algebra Communication Subprograms (BLACS) for communication tasks that arise frequently in parallel linear algebra computations. In the ScaLAPACK routines, all interprocessor communication occurs within the PBLAS and the BLACS. One of the design goals of ScaLAPACK was to have the ScaLAPACK routines resemble their LAPACK equivalents as much as possible.
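As a worked illustration of the two-dimensional block cyclic layout (ordinary C++, not a ScaLAPACK call), the sketch below computes the process-grid coordinates that own a given global matrix entry, assuming 0-based indices, an MB x NB block size, a Pr x Pc process grid, and the first block resident on process (0, 0).

```cpp
// Which process owns global matrix entry (i, j) under a 2-D block cyclic
// decomposition: block row (i / MB) cycles over the Pr process rows, and
// block column (j / NB) cycles over the Pc process columns.
#include <cstdio>

struct ProcCoord { int prow, pcol; };

ProcCoord owner(int i, int j, int MB, int NB, int Pr, int Pc)
{
    return { (i / MB) % Pr, (j / NB) % Pc };
}

int main()
{
    // 2x3 process grid with 2x2 blocks: entry (5, 4) is in block (2, 2),
    // which lands on process (2 % 2, 2 % 3) = (0, 2).
    ProcCoord p = owner(5, 4, 2, 2, 2, 3);
    std::printf("entry (5,4) -> process (%d,%d)\n", p.prow, p.pcol);
    return 0;
}
```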
- PLAPACK, Parallel Linear Algebra Package
PLAPACK is a library infrastructure for the parallel implementation of linear algebra algorithms and applications on distributed memory supercomputers such as the Intel Paragon, IBM SP2, Cray T3D/T3E, SGI PowerChallenge, and Convex Exemplar. The object based infrastructure allows library developers, scientists, and engineers to exploit a natural approach to encoding so-called blocked algorithms, which achieve high performance by operating on submatrices and subvectors. This feature, as well as the use of an alternative, more application-centric approach to data distribution, sets PLAPACK apart from other parallel linear algebra libraries, allowing for strong performance and significantly less programming by the user.
PLAPACK uses MPI for parallelism and includes many dense linear algebra algorithms.
A list of links to various sites providing code for specific algorithms, code for different applications, training courses, etc.
- Software Summary
A list of freely available software for linear algebra on the Web.
This includes codes written in F77, C and C++ and covers dense linear algebra, sparse direct and iterative system solvers as well as sparse iterative eigenvalue problems. Packages are categorised according to whether they can be used for real and/or complex data, programming language, which computer architectures they can be run on and the nature of the problem.
- Numerical Recipes On-line
The complete Numerical Recipes books in C, Fortran 77, and Fortran 90 On-Line, in both PostScript and Adobe Acrobat formats.
- GAMS Guide to Available Mathematical Software
A cross-index and virtual repository of mathematical and statistical software components of use in computational science and engineering.
- MATLAB
With more than 600 mathematical, statistical, and engineering functions, MATLAB provides immediate access to high-performance numerical computing. This functionality is extended with interactive graphical capabilities for creating plots, images, surfaces, and volumetric representations.
Leading-edge toolbox algorithms enhance MATLAB's functionality in domains such as signal and image processing, data analysis and statistics, mathematical modeling, and control design. Toolboxes are collections of algorithms, written by experts in their fields, that provide application-specific numerical, analysis, and graphical capabilities.
- Numerical Algorithms Group (NAG)
NAG provides numerical and statistical libraries in Fortran77, Fortran90, and C as well as an SMP library for shared memory machines and a parallel library for distributed memory machines.
- IMSL, from Visual Numerics.
IMSL provides a large collection of mathematical and statistical functions in libraries written in Fortran77, Fortran90, C and Java, as well as for distributed memory using MPI.
- Mathematica
Mathematica is a fully integrated technical computing system which combines powerful computing software with a convenient user interface. Mathematica's notebook format allows for the generation of cross-platform, fully customizable files that provide professional mathematical typesetting and publication-quality layout of electronic and printed media. Mathematica's features include symbolic and numeric computation, 2D and 3D data visualization, broad programming capabilities, and one-step creation of web documents. The Mathematica package can be used as a direct calculation tool or as a powerful modeling and simulation tool.
- ATLAS (Automatically Tuned Linear Algebra Software)
The ATLAS (Automatically Tuned Linear Algebra Software) project is an ongoing research effort focusing on applying empirical techniques in order to provide portable performance. At present, it provides C and Fortran77 interfaces to a portably efficient BLAS implementation, as well as a few routines from LAPACK. For all supported operations, ATLAS achieves performance on par with machine-specific tuned libraries.
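As a brief sketch of what using an ATLAS build looks like in practice, the following calls cblas_dgemm through the standard C BLAS interface that ATLAS supplies; the header location and link-time library names (e.g. -lcblas -latlas) depend on the installation.

```cpp
// Minimal sketch: C = alpha*A*B + beta*C via the C BLAS interface.
#include <cstdio>
extern "C" {
#include <cblas.h>
}

int main()
{
    const int n = 2;
    double A[] = {1, 2, 3, 4};     /* 2x2, row-major */
    double B[] = {5, 6, 7, 8};
    double C[] = {0, 0, 0, 0};

    /* C = 1.0 * A * B + 0.0 * C; expected result [19 22; 43 50]. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);

    std::printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```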
- Aztec (A Massively Parallel Iterative Solver Library for Solving Sparse Linear Systems)
Aztec is a parallel iterative library for solving linear systems Ax=b that is both easy to use and efficient. Simplicity is attained using the notion of a global distributed matrix, which allows a user to specify pieces (different rows for different processors) of the application matrix exactly as in the serial setting (i.e. using a global numbering scheme). Issues such as local numbering, ghost variables, and messages are hidden from the user and are instead computed by an automated transformation function. Efficiency is achieved using standard distributed memory techniques; locally numbered submatrices, ghost variables, and message information computed by the transformation function are maintained by each processor so that local calculations and communication of data dependencies are fast. Additionally, Aztec takes advantage of advanced partitioning techniques and utilizes efficient dense matrix algorithms when solving block sparse matrices.
Methods (Krylov Iterative): CG, CGS, BiCGSTAB, GMRES, TFQMR
Preconditioners: Point & block Jacobi, Gauss-Seidel, least-squares polynomials, and overlapping domain decomposition using sparse LU, ILU, BILU within domains.
Although the matrix A can be general, Aztec was designed to cope with matrices arising from the approximation of partial differential equations. Aztec is written in ANSI standard C, uses MPI for parallelism and is distributed along with technical documentation, example C and Fortran drivers and sample input files.
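The following conceptual sketch, which is deliberately not Aztec's actual API, illustrates the bookkeeping the transformation function automates: given the global rows a processor owns and the global columns its rows reference, it identifies the ghost entries that must be fetched from other processors.

```cpp
// Conceptual sketch (hand-rolled, not Aztec calls): derive the ghost
// columns a processor must receive, from its owned rows and the global
// column indices its locally stored rows reference.
#include <cstdio>
#include <set>
#include <vector>

int main()
{
    // This processor owns global rows 4..7 of the matrix.
    std::set<int> owned = {4, 5, 6, 7};

    // Global column indices referenced by the locally stored rows,
    // e.g. from a 1-D Laplacian stencil coupling each row to its neighbors.
    std::vector<int> referenced = {3, 4, 5, 6, 7, 8};

    // Ghost entries = referenced columns not owned locally; these are the
    // values that must be communicated before a matrix-vector product.
    for (int c : referenced)
        if (!owned.count(c))
            std::printf("ghost column %d must be communicated\n", c);
    return 0;
}
```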
- PETSc
PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the MPI standard for all message-passing communication. PETSc is intended for use in large-scale application projects, and several ongoing computational science projects are built around the PETSc libraries. With strict attention to component interoperability, PETSc facilitates the integration of independently developed application modules, which often most naturally employ different coding styles and data structures.
PETSc is easy to use for beginners. Moreover, its careful design allows advanced users to have detailed control over the solution process. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers that are easily used in application codes written in C, C++, and Fortran. PETSc provides many of the mechanisms needed within parallel application codes, such as simple parallel matrix and vector assembly routines that allow the overlap of communication and computation. In addition, PETSc includes growing support for distributed arrays.
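A minimal sketch of this interface, assuming a recent PETSc release (the calls below follow the modern KSP API): it assembles a small distributed tridiagonal system and solves it, with error checking and options handling elided.

```cpp
// Minimal PETSc sketch: assemble a tridiagonal system in parallel and
// solve it with KSP. Real codes check every return with CHKERRQ.
#include <petscksp.h>

int main(int argc, char **argv)
{
    PetscInitialize(&argc, &argv, NULL, NULL);

    const PetscInt n = 8;
    Mat A; Vec x, b; KSP ksp;

    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);

    /* Each process assembles only the rows it owns. */
    PetscInt rstart, rend;
    MatGetOwnershipRange(A, &rstart, &rend);
    for (PetscInt i = rstart; i < rend; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);   /* solver/preconditioner chosen at run time */
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp); VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
    PetscFinalize();
    return 0;
}
```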
- SuperLU
SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines. The library is written in C and is callable from either C or Fortran.
The library routines will perform an LU decomposition with partial pivoting and triangular system solves through forward and back substitution. The LU factorization routines can handle non-square matrices but the triangular solves are performed only for square matrices. The matrix columns may be preordered (before factorization) either through library or user supplied routines. This preordering for sparsity is completely separate from the factorization. Working precision iterative refinement subroutines are provided for improved backward stability. Routines are also provided to equilibrate the system, estimate the condition number, calculate the relative backward error, and estimate error bounds for the refined solutions.
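A hedged sketch of the sequential library's simple driver, dgssv, following the style of the SuperLU example programs; the argument list shown matches recent releases (older releases used a shorter one), and deallocation calls are elided.

```cpp
// Hedged SuperLU sketch: solve a 3x3 sparse system in compressed-column
// (Harwell-Boeing) form with the simple driver dgssv.
#include <cstdio>
#include "slu_ddefs.h"

int main()
{
    /* Matrix [2 0 1; 0 3 0; 1 0 2] stored column by column. */
    int m = 3, n = 3, nnz = 5, nrhs = 1, info;
    double a[]   = {2, 1, 3, 1, 2};
    int asub[]   = {0, 2, 1, 0, 2};   /* row index of each stored value  */
    int xa[]     = {0, 2, 3, 5};      /* start of each column within a[] */
    double rhs[] = {1, 1, 1};

    SuperMatrix A, L, U, B;
    dCreate_CompCol_Matrix(&A, m, n, nnz, a, asub, xa, SLU_NC, SLU_D, SLU_GE);
    dCreate_Dense_Matrix(&B, m, nrhs, rhs, m, SLU_DN, SLU_D, SLU_GE);

    int perm_r[3], perm_c[3];
    superlu_options_t options;
    SuperLUStat_t stat;
    set_default_options(&options);
    StatInit(&stat);

    /* LU factorization with partial pivoting, then triangular solves;
       the solution overwrites rhs. */
    dgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info);
    if (info == 0)
        std::printf("x = %g %g %g\n", rhs[0], rhs[1], rhs[2]);

    StatFree(&stat);
    return 0;
}
```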
The SuperLU package comes in three flavors:
- SuperLU for sequential machines
- SuperLU_MT for shared memory parallel machines using pthreads
- SuperLU_DIST for distributed memory using MPI
- Internet Finite Element Resources
This web site lists public domain and shareware programs plus pointers to commercial packages for solving finite element problems.
- PDELab
PDELab is a web-based problem solving environment for modeling physical objects described by Partial Differential Equations (PDEs). The user defines the PDE problem using a graphical interface and selects the solution method. PDELab then provides the computational resources to solve the PDE and visualize the solution. PDELab is based on Parallel ELLPACK (//ELLPACK), an extension of the ELLPACK system for solving elliptic boundary value problems.
The PDELab system solves certain classes of Partial Differential Equations (PDEs) on sequential and parallel platforms. PDELab provides an interactive graphical user interface for specifying the PDE model, selecting the solution method, solving the PDE problem, and visualizing the output. PDELab is supported by the MAXIMA symbolic system and well-known solver libraries. Many PDE solvers have been integrated into PDELab, resulting in a system that can solve a broad range of 1-D, 2-D and 3-D systems of PDEs.
A PDELab problem is defined in terms of the PDE objects involved: equations, domains, boundary and initial conditions, solution strategy, and output requirements. The textual representation of the PDE objects and its syntax comprise PDELab's high-level language. The language is generated by the GUI tools and loaded into the PDELab execution environment, where it is transformed by the language processor into a Fortran driver program, which is then compiled, linked against the libraries, and executed.
- NEOS (Network-Enabled Optimization System)
The NEOS server is available to any user to submit an optimization problem from their local workstation. The problem must be in the correct input form for the chosen solver. Optimization problems are solved automatically with minimal input from the user. Users only need a definition of the optimization problem; all additional information required by the optimization solver is determined automatically.
Also included is an Optimization Software Guide which lists available software for solving different categories of optimization problems.
- NetSolve
NetSolve is a client-server system that enables users to solve complex scientific problems remotely. The system allows users to access both hardware and software computational resources distributed across a network. NetSolve searches for computational resources on a network, chooses the best one available, solves the problem (retrying on failure for fault tolerance), and returns the answer to the user. A load-balancing policy is used by the NetSolve system to ensure good performance by enabling the system to use the available computational resources as efficiently as possible. The framework is based on the premise that distributed computations involve resources, processes, data, and users, and that secure yet flexible mechanisms for cooperation and communication between these entities are the key to metacomputing infrastructures.
Interfaces in Fortran, C, Matlab, and Mathematica have been designed and implemented to enable users to access and use NetSolve more easily. An agent-based design has been implemented to ensure efficient use of system resources.
One of the key characteristics of any software system is versatility. In order to ensure the success of NetSolve, the system has been designed to incorporate any piece of software with relative ease. There are no restrictions on the type of software that can be integrated into the system.
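The sketch below suggests the shape of a blocking client call; netsl() is NetSolve's documented C entry point, but the prototype declared here, the problem name hypothetical_solve(), and its argument order are all assumptions, since the real calling sequence is dictated by the problem description installed on the server.

```cpp
// Hedged sketch of a blocking NetSolve client call. The problem name and
// argument list are hypothetical; consult the server's problem description
// for the real calling sequence.
#include <cstdio>

extern "C" int netsl(char *problem_name, ...);   /* assumed variadic client call */

int main()
{
    int n = 4;
    double A[16] = { /* ... matrix entries ... */ };
    double b[4]  = { /* ... right-hand side ... */ };

    /* Ship the problem to the NetSolve agent and block until the answer
       comes back; results are typically returned in the user's arrays. */
    int status = netsl((char*)"hypothetical_solve()", n, A, b);
    if (status < 0)
        std::printf("NetSolve call failed: %d\n", status);
    return 0;
}
```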
- Snark
Snark is a finite element method (FEM) package armed with state-of-the-art particle-in-cell (PIC) and algebraic multigrid (AMG) techniques, and it aims to solve partial differential equations (PDEs) for field problems. Written in C/C++, Snark supports scalable parallel execution and is portable, so it can be employed on large problem systems. Snark is a VPAC in-house research and collaboration project.
- pLAB
pLAB is a server devoted to the theory and practice of random number generation. It contains links to literature and software in this area.
- The Access Grid
The Access Grid (AG) is the ensemble of resources that can be used to support human interaction across the grid. It consists of multimedia display, presentation, and interaction environments; interfaces to grid middleware; and interfaces to visualization environments. The Access Grid will support large-scale distributed meetings, collaborative work sessions, seminars, lectures, tutorials and training. The Access Grid design point is group-to-group communication (thus differentiating it from desktop-to-desktop tools that focus on individual communication). The Access Grid environment must enable both formal and informal group interactions. Large-format displays integrated with intelligent or active meeting rooms are a central feature of Access Grid nodes. Access Grid nodes are "designed spaces" that explicitly contain the high-end audio and visual technology needed to provide a high-quality, compelling user experience.
- The Globus Project
The Globus Project is developing fundamental technologies needed to build computational grids. Grids are persistent environments that enable software applications to integrate instruments, displays, and computational and information resources that are managed by diverse organizations in widespread locations.
The Grid refers to an infrastructure that enables the integrated, collaborative use of high-end computers, networks, databases, and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing and often require secure resource sharing across organizational boundaries, and are thus not easily handled by today's Internet and Web infrastructures.
Globus software development has resulted in the Globus Toolkit, a set of services and software libraries to support Grids and Grid applications. The Toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability.
Groups around the world are using the Globus Toolkit to build Grids and to develop Grid applications. Globus research targets technical challenges that arise from these activities. Typical research areas include resource management, data management and access, application development environments, information services, and security.
- The Global Grid Forum
The Global Grid Forum (GGF) is a community-initiated forum of individual researchers and practitioners working on distributed computing, or "grid" technologies. GGF is the result of a merger of the Grid Forum, the eGrid European Grid Forum, and the Grid community in Asia-Pacific.
The goals of the GGF are:
- To facilitate and support the creation and development of regional and global computational grids that will provide to the scientific community, industry, government and the public at large dependable, consistent, pervasive and inexpensive access to high-end computational capabilities;
- To address architecture, infrastructure, standards and other technical requirements for computational grids and to facilitate and find solutions to obstacles inhibiting the creation of these grids;
- To educate the scientific community, industry, government and the public regarding the technologies involved in, and potential uses and benefits of, computational grids;
- To facilitate the application of grid technologies within educational, research, governmental, healthcare and other industries;
- To provide a forum for exploration of computational grid technologies, applications and opportunities, and to stimulate collaboration among the scientific community, industry, government and the public regarding the creation, development and use of computational grids;
- Survey of Computational Grid, Meta-computing and Network Information Tools.
A software environment of unprecedented quality and functionality is emerging in which coupled computing resources are accessed via client-server and Web-based tools. This is driven by a combination of the computer industry, which is rapidly developing software for e-commerce and leisure use, and the loose collection of worldwide "freeware" programmers. Geoffrey Fox has referred to it as the "Distributed Commodity Computing and Information System". In this survey we examine a number of tools and projects for science and engineering applications on wide-area network based systems. This includes computational steering and meta-computing techniques.
- NPACI Metasystems Thrust area
NPACI is pioneering software development and integration through its thrust areas, which enable the NPACI leadership to coordinate the activities across the 48 partner sites. Thrust area activities are organized into projects, each of which also has ties to ongoing, separately funded research. By design each project joins activities in both Technologies and Applications thrust areas and teams partners from several institutions. This approach is breaking down barriers that traditionally have separated computer and applications scientists. It is also building the persistent intellectual framework necessary to address increasingly complex problems.
- Legion
Legion is an object-based, meta-systems software project at the University of Virginia. The goal has been to achieve a highly usable, efficient, and scalable system addressing key issues such as scalability, programming ease, fault tolerance, security, and site autonomy.
- TeraGrid
The National Computational Science Alliance is developing the TeraGrid, which will be the world's largest, fastest, and most comprehensive distributed infrastructure for open scientific research when deployed next year.
The TeraGrid is a $53 million effort funded by the National Science Foundation that involves four partners: NCSA, the lead organization in the Alliance; the San Diego Supercomputer Center, the lead organization in the National Partnership for Advanced Computational Infrastructure (NPACI); Argonne National Laboratory, a key Alliance partner; and the California Institute of Technology (Caltech), a key NPACI partner. When completed, the TeraGrid will include 13.6 teraflops of Linux Cluster computing power distributed at the four TeraGrid sites, facilities capable of managing and storing more than 450 terabytes of data, high-resolution visualization environments, and toolkits for grid computing.
- Oak Ridge High Performance Computing
Oak Ridge National Laboratory is involved in research in the areas of terascale computing, high performance storage, high performance networking, software tools and applications, visualization and virtual labs.
- European Data Grid
The European DataGrid is a project funded by the European Union with the aim of setting up a computational and data-intensive grid of resources for the analysis of data coming from scientific exploration. Next generation science will require co-ordinated resource sharing, collaborative processing and analysis of huge amounts of data produced and stored by many scientific laboratories belonging to several institutions.
The main goal of the DataGrid initiative is to develop and test the technological infrastructure that will enable the implementation of scientific collaborations where researchers and scientists will perform their activities regardless of geographical location. It will also allow interaction with colleagues from sites all over the world as well as the sharing of data and instruments on a scale previously unattempted.
The project will devise and develop scalable software solutions and testbeds in order to handle many petabytes of distributed data, tens of thousands of computing resources (processors, disks, etc.), and thousands of simultaneous users from multiple research institutions.
The DataGrid initiative is led by CERN, the European Organization for Nuclear Research, together with five other main partners and fifteen associated partners. The project brings together the following European leading research agencies: the European Space Agency (ESA), France's Centre National de la Recherche Scientifique (CNRS), Italy's Istituto Nazionale di Fisica Nucleare (INFN), the Dutch National Institute for Nuclear Physics and High Energy Physics (NIKHEF) and UK's Particle Physics and Astronomy Research Council (PPARC). The fifteen associated partners come from the Czech Republic, Finland, France, Germany, Hungary, Italy, the Netherlands, Spain, Sweden and the United Kingdom.
- Particle Physics Data Grid
The Particle Physics Data Grid project aims to develop, acquire and deliver vitally needed Grid-enabled tools for data-intensive requirements of particle and nuclear physics. Novel mechanisms and policies will be vertically integrated with Grid middleware and experiment-specific applications and computing resources to form effective end-to-end capabilities. PPDG is a collaboration of computer scientists with a strong record in distributed computing and Grid technology, and physicists with leading roles in the software and network infrastructures for major high-energy and nuclear experiments. Together they have the experience, knowledge and vision in the scientific disciplines and technologies required to bring Grid-enabled data manipulation and analysis capabilities to the desk of every physicist.
- Network for Earthquake Engineering Simulation
NEESgrid (Network for Earthquake Engineering Simulation grid) is a virtual laboratory for earthquake engineering which will provide data storage facilities and repositories for widespread earthquake engineering research sites. It will also augment existing experimental methods used by the earthquake research community with computational approaches. The computational methods will require the development of numerical models that can predict the responses of buildings, various construction materials, or specific structural members under a variety of loadings. When used in conjunction with experimental techniques, computational methods provide a framework for new approaches in engineering analysis. NEESgrid will serve three communities of researchers:
- Structural Engineering that looks at the impact of seismic activity on man-made structures
- Geotechnical Engineering that looks at the interaction between seismic activity, subsurface soil and rock, and the foundations and infrastructures of man-made structures
- Tsunami Research that looks at the formation and effects of tsunamis
- SRB (Storage Resource Broker)
The SDSC Storage Resource Broker (SRB) is client-server middleware that provides a uniform interface for connecting to heterogeneous data resources over a network and accessing replicated data sets. SRB, in conjunction with the Metadata Catalog (MCAT), provides a way to access data sets and resources based on their attributes rather than their names or physical locations.
Storage systems handled by the current release of the SRB include the UNIX file system, archival storage systems such as UNITREE and HPSS, and database Large Objects managed by various DBMS including DB2, Oracle and Illustra.
- The Virtual Observatory Forum
Technological advances in telescope and instrument design as well as the exponential increase in computer and communications capability have caused a dramatic change in the character of astronomical research. Large scale surveys of the sky from space and ground are being initiated at wavelengths from radio to x-ray, thereby generating vast amounts of high quality irreplaceable data. The large size and complexity of this data means that new tools and structures are required in order to discover the complex phenomena encoded within the data. The Virtual Observatory will link the archival data sets of space- and ground-based observatories, the catalogues of multi-wavelength surveys, and the computational resources necessary to support comparison and cross-correlation among these resources. In order to achieve these aims the VO will be involved in:
- The establishment of a common systems approach to data pipelining, archiving and retrieval that will ensure easy access by a large and diverse community of users and that will minimize costs and time to completion;
- Enabling the distributed development of a suite of commonly usable new software tools to make possible querying, correlation, visualization and statistical comparisons;
- Co-ordinating the establishment of high speed data transfer networks that are essential to providing the connectivity among archives, terascale computing facilities, and the widespread community of users;
- Facilitating productive collaborations among astronomy centres and major academic institutions in order to maximize productivity and minimize infrastructure costs;
- Ensuring communication and possible collaborations with scientists in other disciplines facing similar problems, and with the private sector;
- Maintaining a continuing program of public and educational outreach that capitalizes upon the unique resources, in both data and software, of the VO to provide a unique window into astronomy and scientific methodology.
- Cluster Computing
This is Rajkumar Buyya's home page, which under the heading Info Centre gives web links for cluster and grid computing.
- IEEE Task Force on Cluster Computing
The TFCC is an international forum promoting cluster computing research and education. It participates in helping to set up and promote technical standards in this area. The Task Force is concerned with issues related to the design, analysis, development and implementation of cluster-based systems. Of particular interest are: cluster hardware technologies, distributed environments, application tools and utilities, as well as the development and optimisation of cluster-based applications.
- CUMULVS (Collaborative User Migration, User Library for Visualization and Steering)
CUMULVS is a software infrastructure for the development of collaborative environments. It supports interactive visualization and remote computational steering of distributed applications by multiple collaborators, and provides a mechanism for constructing fault-tolerant, migrating applications in heterogeneous distributed computing environments.
CUMULVS is a project at ORNL which aims to assist in the development of parallel and distributed applications. CUMULVS allows scientists to easily incorporate fault tolerance, interactive visualization and computational steering into their applications. The system is a valuable new tool for use in many large scientific simulations because it allows the scientist to visually monitor large data fields of an ongoing computation and to remotely control algorithmic and model parameters while the application is running. In addition, CUMULVS provides a simple way to incorporate checkpointing and distributed task migration inside large applications. This facility supports automatic recovery / restart of application tasks, even across heterogeneous architecture and topology boundaries.
CUMULVS provides several important features for the computational scientist. It handles the details of collecting and sending distributed data fields to, and receiving steering parameters from, multiple dynamically attached viewers. The viewers provide a uniform global view of data, even if the data is decomposed across many distributed tasks. CUMULVS manages all aspects of the dynamic attachment and detachment of multiple viewers to a running simulation. Viewers can be commercial packages such as AVS, public domain software such as Tcl/Tk, or customized viewers for specific application domains.
CUMULVS produces time-coherent views of application data that could potentially be changing asynchronously on parallel computers all across the nation. CUMULVS ensures the coherency of steering parameter updates when multiple collaborators are viewing and steering the application at the same time, and changes to steering parameters are coordinated across the application tasks so that updates are applied at a consistent time step in each task.
- Clustor (Nimrod)
Clustor is a software development tool for utilizing the power of cluster computing. Unlike other tools for the development of parallel programs, Clustor requires no programming and no changes to applications. With Clustor, applications can be easily and quickly enhanced with parallel execution. Clustor greatly simplifies a common activity for every engineer, researcher or scientist: running the same program code numerous times with different sets of input parameters.
To perform a computational task, the user simply specifies input parameters and the task commands to be executed. Clustor does the rest: it generates the jobs to be computed (often numbering in the thousands), executes them, and collects the results.
Clustor provides the following benefits and features:
- Clustor requires no programming. Custom user interfaces can be prepared through a simple, intuitive graphical interface.
- Clustor simplifies generation of jobs by providing an easy way to specify input parameters and to manage output results.
- Clustor speeds up the execution by distributing the jobs over a network of computers. The distribution of jobs and the collection of the results are done transparently to the user. If required, Clustor can be easily integrated with industry standard batch managers and load distribution programs.
- Clustor runs on major Unix platforms and Windows NT.
- Condor
Condor is a High Throughput Computing environment that can manage very large collections of distributively owned workstations. Its development has been motivated by the ever increasing need of scientists and engineers to harness the capacity of such collections. The environment is based on a novel layered architecture that enables it to provide a powerful and flexible suite of Resource Management services to sequential and parallel applications.
Condor views the owners of the resources as holding the key to the success of a High Throughput Computing environment. It therefore pays special attention to the rights and sensitivities of the workstation owners. It is the owner of each and every workstation in the collection who defines the conditions under which the workstation can be allocated by Condor to an external user. By means of its unique remote system call capabilities, Condor preserves a large measure of the originating machine's environment on the execution machine, even if the originating and execution machines do not share a common file system and/or user ID scheme. Condor jobs that consist of a single process are automatically checkpointed and migrated between workstations as needed to ensure eventual completion.
- Ninf, Network based Information Library for high performance computing
Ninf is an ongoing global computing infrastructure project which allows users to access computational resources, including hardware, software and scientific data, distributed across a wide area network through an easy-to-use interface. Users can build applications by calling the libraries with the Ninf Remote Procedure Call, which is designed to provide a programming interface similar to conventional function calls and is tailored for scientific computation. In order to facilitate location transparency and network-wide parallelism, the Ninf metaserver maintains global resource information regarding computational servers and databases, allocating and scheduling coarse-grained computations to achieve good global load balancing.
The basic Ninf system supports client-server based computing. The computational resources are available as remote libraries on a remote computation host, which can be called through the global network from a programmer's client program written in an existing language such as Fortran, C, or C++. The parameters, including large arrays, are efficiently marshalled and sent to the Ninf server on a remote host, which in turn executes the requested libraries and sends back the results. The Ninf remote procedure call (RPC) is designed to provide a programming interface that will be very familiar to programmers of existing languages. The programmer can build a global computing system by using the Ninf remote libraries as its components, without being aware of the complexities and hassles of network programming.
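A hedged sketch of such a client call, modeled on the canonical linpack example in the Ninf papers; the header, prototype, and argument conventions shown here are assumptions that depend on the Ninf release.

```cpp
// Hedged Ninf sketch: the client names a remote library entry and passes
// arguments as in a local call; the system marshals the arrays to the
// server and returns results in place.
extern "C" int Ninf_call(char *entry_name, ...);   /* assumed client interface */

int main()
{
    const int n = 100, lda = 100;
    static double a[100 * 100], b[100], x[100];
    /* ... fill a and b with the system to solve ... */

    /* Remote solve of a*x = b on a Ninf server; "linpack" is the entry
       name used in the example from the Ninf literature. */
    Ninf_call((char*)"linpack", n, a, lda, b, x);
    return 0;
}
```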
- ANUSF Vizlab
- Sydney Vislab
- Interactive Information Institute RMIT
- Electronic Visualization Laboratory University of Illinois at Chicago.
- VRML (Virtual Reality Modeling Language)
The Virtual Reality Modeling Language (VRML) is a language for describing multi-participant interactive simulations -- virtual worlds networked via the global Internet and hyper-linked with the World Wide Web. All aspects of virtual world display, interaction and internetworking can be specified using VRML. It is the intention of its designers that VRML become the standard language for interactive simulation within the World Wide Web.
- Khoros Pro 2001
Quickly develop solutions to complex computing problems using the Khoros Pro 2001 Integrated Development Environment (IDE). Available for a variety of UNIX platforms, Khoros Pro 2001 lets you rapidly prototype solutions, develop new software, manage complex software configurations, and integrate diverse software programs into a uniform framework. Khoros Pro 2001 delivers flexible functionality and increases productivity in a wide variety of application areas.
- AVS
AVS is a visual programming environment for scientific visualization applications.
- CSpray
As networking technology brings us into the information age, 3D and multimedia applications become more prevalent and the emerging norm. Improvements in communication infrastructure also allow us to explore the realm of group interaction and collaboration over the information superhighway. Our work focuses on providing a small group of geographically distributed scientists the means of sharing their data and interactively creating visualizations and analyzing them. This allows for a shorter turnaround time compared to more traditional means, which in turn allows for greater research productivity. The key features of our system include: different levels of information sharing, incremental updates to reduce network traffic, an intuitive floor control strategy for coordinating access to shared resources, a built-in session manager to handle participants who either join late or leave early, and a host of collaborative 3D visualization aids. An added feature is that while the system is designed for small group collaborations, it can also be used for briefing a larger audience. These features are included in the CSpray collaborative 3D visualization system.
- Tecate
Tecate is a software platform for doing exploratory visualization of data collected from networked data sources. It provides the infrastructure for applications that allow end-users to browse the contents of data sources as well as allow them to inspect, measure, compare, and identify patterns in selected data-sets. Tecate provides interfaces to the World-Wide Web, and to databases managed by database management systems. In addition, Tecate dynamically crafts user-interfaces and interactive visualizations of data-sets with the aid of an expert system. This system automatically maps many kinds of data-sets into virtual worlds that can be explored by end-users. In describing these worlds, Tecate uses an interpreted language that is capable of arbitrary computations, and the mediation of communication among different processes.
- VisAD
VisAD is a Java component library for interactive and collaborative visualization and analysis of numerical data. The name VisAD is an acronym for "Visualization for Algorithm Development". The system combines:
- The use of pure Java for platform independence and to support data sharing and real-time collaboration among geographically distributed users. Support for distributed computing is integrated at the lowest levels of the system using Java RMI distributed objects.
- A general mathematical data model that can be adapted to virtually any numerical data, that supports data sharing among different users, different data sources and different scientific disciplines, and that provides transparent access to data independent of storage format and location (i.e., memory, disk or remote). The data model has been adapted to netCDF, HDF-5, FITS, HDF-EOS, McIDAS, Vis5D, GIF, JPEG, TIFF, QuickTime, ASCII and many other file formats.
- A general display model that supports interactive 3-D, data fusion, multiple data views, direct manipulation, collaboration, and virtual reality. The display model has been adapted to Java3D and Java2D and used in an ImmersaDesk virtual reality display.
- Data analysis and computation integrated with visualization to support computational steering and other complex interaction modes.
- Support for two distinct communities: developers who create domain-specific systems based on VisAD, and users of those domain-specific systems. VisAD is designed to support a wide variety of user interfaces, ranging from simple data browser applets to complex applications that allow groups of scientists to collaboratively develop data analysis algorithms.
- Developer extensibility in as many ways as possible.
- Vis5D
This public domain package from SSEC is a highly interactive 3D fluid flow visualization package. It is targeted at meteorological flow but is useful for any flow visualization problem.
- Side Effects Houdini
A very powerful suite of modelling, animation and compositing tools, used both for realtime VR environments and for animating visualization sequences to video. The system has a steep learning curve: it does not suit the occasional user, but it provides expert users with all the functionality needed to produce animation of the highest quality.
- dpsVelocity
dpsVelocity is a real-time non-linear editing and web streaming system. By combining dual-stream real-time hardware and powerful NLE software into a fully integrated solution, dpsVelocity provides digital video and content creation professionals with faster, sharper, and easier edits for video, broadcast, CD-ROM, DVD, and the Internet, including live webcasting.
- Pixar's PhotoRealistic RenderMan
This 3D rendering software provides support for scenes of high complexity and employs an open procedural interface to material properties through its 'Shader' language.