Stanford University, Stanford, California
Jack Dongarra, University of Tennessee, Knoxville; and Oak Ridge National Laboratory
Present-day computers, even workstations, allow the solution of very large-scale problems in science and engineering. Most often, a major part of the computational effort goes into solving linear algebra subproblems. We will discuss a variety of algorithms for these problems, indicating where each is appropriate and emphasizing their efficient implementation. In particular, the development of vector and parallel computers since the late 1970s has led to a critical review of mathematical software. Many of the sequential algorithms used satisfactorily on traditional machines fail to exploit the architecture of advanced computers. We briefly review some of the features of modern computer systems and illustrate how the architecture affects the potential performance of linear algebra algorithms. We will consider recent techniques devised to utilize advanced architectures more fully, especially the design of the Level 1, 2, and 3 BLAS. We will highlight the LAPACK package, which provides a choice of algorithms, mainly for dense matrix problems, that are efficient and portable on a variety of high-performance computers.
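To make the distinction between the three BLAS levels concrete, the following sketch (not part of the course materials; plain NumPy is used here merely to illustrate the operation classes, not the actual BLAS calling interface) shows one representative operation from each level:

```python
import numpy as np

n = 4
alpha = 2.0
A = np.arange(n * n, dtype=float).reshape(n, n)
B = np.eye(n)
x = np.ones(n)
y = np.zeros(n)

# Level 1 BLAS: vector-vector operations, O(n) work on O(n) data.
# Example: AXPY, y <- alpha*x + y
y = alpha * x + y

# Level 2 BLAS: matrix-vector operations, O(n^2) work on O(n^2) data.
# Example: GEMV, y <- alpha*A@x + y
y = alpha * (A @ x) + y

# Level 3 BLAS: matrix-matrix operations, O(n^3) work on O(n^2) data.
# Example: GEMM, C <- alpha*A@B
C = alpha * (A @ B)
```

The higher the level, the more arithmetic is performed per memory reference, which is why Level 3 kernels can approach peak speed on cache-based and parallel machines while Level 1 kernels are typically memory-bound.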
For large sparse linear systems, the situation is more complicated and a wide range of algorithms is available. We will give an introduction to this field and guidelines on the selection of appropriate software. We will consider both direct and iterative methods of solution, including some recent work that can be viewed as a hybrid of the two. In the case of direct methods, we will emphasize frontal and multifrontal methods, including variants that perform well on parallel machines. For iterative methods, our discussion will include CG, BiCG, QMR, CGS, BiCGSTAB, GMRES, and LSQR. For large (sparse) eigenproblems, we will discuss some of the most widely used methods, such as Lanczos, Arnoldi, and Davidson. Particular attention will be given to the efficient implementation of Arnoldi's method; the Implicitly Restarted Arnoldi method will be discussed, along with guidelines for its use, preconditioning, and hints for selecting among these algorithms.
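As a small taste of the iterative methods named above, here is a textbook sketch of the conjugate gradient (CG) method for a symmetric positive definite system. This is a generic illustration only, not code from any of the packages discussed in the course; variable names follow the standard notation.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Textbook conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# Small SPD test problem: a 1-D discrete Laplacian
n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
```

In exact arithmetic CG converges in at most n steps; in practice, for the large sparse systems the course addresses, convergence depends strongly on the conditioning of A, which is why preconditioning receives so much attention.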
Finally, we address the challenge facing designers of mathematical software in view of the development of highly parallel computer systems. We shall discuss ScaLAPACK, a project to develop and provide high-performance, scalable algorithms suitable for highly parallel computers. We will also consider techniques for implementing sparse matrix algorithms in a highly parallel environment, in particular the solution of sparse systems of linear equations.
The course is 20% introductory, 50% intermediate, and 30% advanced.
Those interested in mathematical software, computational science, or numerical analysis will benefit from this tutorial. It should be of particular benefit to researchers and graduate students interested in the efficient solution of large-scale linear algebra problems (solution of linear systems and eigenproblems). We will discuss techniques employed in the efficient coding of linear algebra kernels in software packages, as well as tools for the development and analysis of parallel algorithms on shared- and distributed-memory machines.
The lectures assume a general knowledge of numerical linear algebra and some familiarity with high-performance computers.
Jack Dongarra holds a joint appointment as Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee (UT) and as Distinguished Scientist in the Mathematical Sciences Section at Oak Ridge National Laboratory (ORNL). He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His current research also involves the development, testing, and documentation of high-quality mathematical software. He was involved in the design and implementation of the software packages EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, the BLACS, MPI, and PVM/HeNCE, and is currently involved in the design of algorithms and techniques for high-performance computer architectures.
Iain S. Duff is currently Group Leader of Numerical Analysis in the Central Computing Department at the Rutherford Appleton Laboratory. He is also the Project Leader for the Parallel Algorithms Group at CERFACS in Toulouse and a visiting professor at the University of Strathclyde. Duff obtained a first-class honors degree in mathematics and natural philosophy from the University of Glasgow in 1969 and was awarded a D.Phil. in mathematics from Oxford University in 1972. Before April 1990, he was Group Leader of Numerical Analysis at the Harwell Laboratory, where he had worked in the Computer Science and Systems Division since 1975. Before joining Harwell, he was a Harkness Fellow visiting Stony Brook and Stanford and spent two years as a lecturer in computing science at the University of Newcastle. He has had several extended visits to Argonne National Laboratory, the Australian National University, the University of Colorado at Boulder, Stanford University, and the University of Umeå, Sweden. He is a fellow of the Institute of Mathematics and Its Applications, a member of SIAM, and an editor or associate editor of several journals. His main interests are sparse matrices, mathematical software, and parallel computation.
Danny C. Sorensen received his Ph.D. in 1977 from the Department of Mathematics, University of California, San Diego. His research interests are in numerical analysis and parallel computation. His specialties include numerical linear algebra, the use of advanced computer architectures, programming methodology and tools for programming parallel computers, and numerical methods for nonlinear optimization. His recent work includes the development of Implicitly Restarted Arnoldi Methods and the related software package ARPACK for large-scale eigenvalue problems. Dr. Sorensen was an Assistant Professor of Mathematics at the University of Kentucky from 1977 to 1980, and was a Senior Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory from 1980 to 1989. He is now a Professor in the Mathematical Sciences Department of Rice University. He has been a Visiting Professor at the Department of Operations Research, Stanford University, and at the Department of Mathematics, University of California, San Diego. He has also been a visiting scientist at RIACS, CERFACS, and the Center for Supercomputing Research and Development at the University of Illinois at Urbana.
Noel M. Nachtigal is a research staff member in the Mathematical Sciences Section of the Oak Ridge National Laboratory. He received his Ph.D. in applied mathematics in 1991 from the Massachusetts Institute of Technology, for a thesis on the look-ahead Lanczos algorithm and its application to the quasi-minimal residual method. Before coming to ORNL as a Householder fellow in 1993, he held a two-year postdoctoral appointment at the Research Institute for Advanced Computer Science.
He is a co-author of the software package QMRPACK, which implements several QMR algorithms based on the look-ahead Lanczos process. His current research interests include iterative methods for linear systems, with an emphasis on applications.
8:30-10:00 Dense Matrix Algorithms and Software
10:30-12:00 Sparse Direct Methods and Software for Systems of Equations
1:30-3:00 Iterative Methods and Software for Systems of Equations
3:30-5:00 Iterative Methods and Software for Eigenvalue Problems
5:00 Short Course adjourns
The following books will be available to attendees at discounted prices during the meeting.
Seats are limited. We urge attendees to register early to take advantage of the advance rates. To register for either short course, please complete the Preregistration Form and return it, with your payment, to reach SIAM on or before June 30, 1997.
Registration fees include course notes, coffee breaks, and lunch.