First SIAM Workshop on Combinatorial Scientific Computing
December 26, 2004
Alan George was recognized at the workshop for pioneering contributions to the field of combinatorial scientific computing.
Alex Pothen and Bruce Hendrickson
Scientific computing is traditionally viewed as the province of continuous mathematics, of differential equations and linear algebra. Many combinatorial subproblems, however, arise in the solution of scientific computing problems.
"Combinatorial scientific computing," a recently coined label for interdisciplinary research involving discrete mathematics in scientific computing, refers to the development, analysis, and application of combinatorial algorithms to solve problems in computational science and engineering.
Sparse Matrices Among Themes at CSC Workshop
The name may be new, but combinatorial scientific computing has been an active research area for decades. Thus far, however, researchers in CSC have been isolated in subcommunities delineated primarily by the application areas in which they work.
No longer! To build a community of researchers in combinatorial scientific computing, and to provide a forum for discussion and collaboration, SIAM sponsored the Workshop on Combinatorial Scientific Computing (CSC04), held in San Francisco, February 27 and 28, immediately following the 11th SIAM Conference on Parallel Processing for Scientific Computing. The workshop was organized by John Gilbert, Horst Simon, and Sivan Toledo, along with the authors of this article.
The workshop opened with recognition for Alan George, one of the pioneering researchers in CSC, on the milestone of his 60th birthday. George, a professor of computer science and dean of the Faculty of Mathematics at the University of Waterloo, was recognized for several significant contributions to the field of CSC, in particular his work on sparse matrix computations. His many achievements include graph models and algorithms for solving sparse systems of linear equations; SPARSPAK, one of the first software libraries for solving linear equations and least-squares problems; one of the first books on sparse direct solvers; and parallel algorithms for solving sparse systems of equations. A plaque recognizing his research contributions and his role as a mentor to many working in the field was presented to George on behalf of the CSC community.
Helping to set the tone at CSC04 were three invited plenary speakers: Dan Gusfield of the University of California at Davis ("Combinatorial Optimization in Computational Biology and Bioinformatics"), Shang-Hua Teng of Boston University and Akamai Technologies ("Smoothed Analysis of Algorithms"), and Richard Brualdi of the University of Wisconsin ("Some Highlights of Combinatorial Matrix Theory"). In all, 21 speakers discussed six broad themes in CSC: sparse matrix computations, performance aspects of algorithms, automatic differentiation and sparse optimization, mesh generation, combinatorial matrix theory, and computational biology. The list of speakers, and the titles and extended abstracts of their talks, can be found at http://www.tau.ac.il/~stoledo/csc04/.
A Sample of Hot Topics in CSC
Several of the combinatorial themes discussed at the workshop have taken center stage in CSC research during the past twenty years. To give readers a taste of research in CSC, we briefly describe developments in two active research areas.
The first is the development of combinatorics-based preconditioners via support theory. One of the key challenges arising in the solution of sparse linear systems of equations by iterative solvers is the need for a preconditioner: a matrix that accelerates the convergence of the iterative method. Preconditioning theory depends on techniques for bounding the extremal eigenvalues of the preconditioned coefficient matrix. A surprising result of support theory is that for some important classes of matrices, these bounds can be computed by studying graph embeddings. A graph that corresponds to the original matrix is embedded in the graph of the preconditioner, so that each edge in the original graph is "supported" by a path in the graph of the preconditioner.
Various bounds can be computed from these embeddings. In the original approach, developed by Pravin Vaidya, the condition number is bounded by the product of the "dilation," the maximum length of such a supporting path, and the "congestion," the maximum number of times any edge of the preconditioner graph is used across these paths. Since then, support theory has been studied extensively by several others, including Sivan Toledo, Doron Chen, Erik Boman, and John Gilbert. Researchers have devised a variety of combinatorial and algebraic extensions to the original ideas, and combinatorial preconditioners based on these ideas have been implemented.
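As a small illustrative sketch (not code from any workshop talk), the congestion and dilation of such an embedding can be computed directly. Here the edges of a cycle graph are supported by paths in a spanning path-tree; the toy graph, the function names, and the bound computation are assumptions for illustration only:

```python
# Sketch of Vaidya's congestion-dilation bound from support theory.
# We embed each edge of a cycle into the unique path joining its endpoints
# in the spanning path-tree 0-1-2-...-(n-1).
from collections import defaultdict

def tree_path(u, v):
    """Supporting path between u and v in the path-tree: just the interval."""
    lo, hi = min(u, v), max(u, v)
    return [(i, i + 1) for i in range(lo, hi)]

def congestion_dilation(edges):
    dilation = 0
    usage = defaultdict(int)  # how many supporting paths use each tree edge
    for u, v in edges:
        path = tree_path(u, v)
        dilation = max(dilation, len(path))        # longest supporting path
        for e in path:
            usage[e] += 1
    congestion = max(usage.values())               # most heavily used tree edge
    return congestion, dilation

# Cycle on 6 vertices, supported by the spanning path 0-1-2-3-4-5.
n = 6
cycle_edges = [(i, (i + 1) % n) for i in range(n)]
c, d = congestion_dilation(cycle_edges)
print(c, d, c * d)  # Vaidya-style bound on the condition number: c * d
```

In this toy instance the "long" edge (5, 0) is supported by the whole path, giving dilation 5, while every tree edge is used twice, giving congestion 2.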
In recent work, Dan Spielman and Shang-Hua Teng developed a preconditioner that allows the solution of all symmetric, diagonally dominant matrices in near-linear time; these results apply to all diagonally dominant matrices, and not just those arising from elliptic partial differential equations. Their approach uses ideas from graph partitioning and graph spanners. Naturally, all of these techniques can be reinterpreted in the traditional language of linear algebra, but the transformation into a combinatorial problem opens the way for the application of novel perspectives and tools. This is a recurring theme of work in combinatorial scientific computing.
The second development highlighted here is the use of combinatorial techniques for nonlinear problems or, more precisely, for the efficient computation of Jacobians and Hessians. For many years, graph coloring has been used for efficient computation of finite-difference approximations to these derivative matrices. Automatic differentiation, an alternative to finite differences, also gives rise to coloring and other combinatorial problems.
Working in the U.K. in the 1970s, Alan Curtis, Mike Powell, and John Reid reduced the number of (vector) function evaluations needed to compute sparse Jacobians by packing together structurally orthogonal columns, i.e., groups of columns whose nonzeros lie in disjoint rows. During the following decade, Tom Coleman and Jorge Moré showed that graph-coloring models can be used to formulate the problem of minimizing the number of function evaluations needed, and that the use of coloring algorithms can lead to efficient computations of sparse Jacobians and Hessians. A total of eight variant coloring problems arise in this context.
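The column-packing idea can be sketched with a simple greedy pass over a made-up sparsity pattern; the function name and the tridiagonal example below are illustrative assumptions, not the original authors' code:

```python
# Sketch of the Curtis-Powell-Reid idea: greedily pack columns of a sparse
# Jacobian pattern into structurally orthogonal groups, so that a single
# finite-difference (vector) function evaluation recovers a whole group.
def pack_columns(pattern, ncols):
    """pattern: dict mapping column index -> set of rows with nonzeros."""
    groups = []  # each group: (set of columns, set of rows already covered)
    for col in range(ncols):
        rows = pattern[col]
        for cols, covered in groups:
            if covered.isdisjoint(rows):  # structurally orthogonal with group
                cols.add(col)
                covered |= rows
                break
        else:
            groups.append(({col}, set(rows)))
    return [sorted(cols) for cols, _ in groups]

# Tridiagonal 5x5 pattern: column j has nonzeros in rows j-1, j, j+1.
pattern = {j: {r for r in (j - 1, j, j + 1) if 0 <= r < 5} for j in range(5)}
print(pack_columns(pattern, 5))  # → [[0, 3], [1, 4], [2]]
```

For this tridiagonal pattern, three groups suffice, so the Jacobian is recovered with three function evaluations instead of five.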
A comprehensive re-examination of this problem by Assefaw Gebremedhin, Fredrik Manne, and one of us (A.P.) has provided a unifying framework for the coloring formulations of the eight problems. This work highlights the central role of distance-2 coloring as a model for structural orthogonality. A distance-k coloring of a graph assigns colors to the vertices so that any two vertices joined by a path of at most k edges receive distinct colors. The unified framework leads to the formulation of the matrix-estimation problems as distance-2, "distance-3/2," and acyclic coloring problems. In an acyclic coloring of a graph, every vertex is assigned a color distinct from those of its neighbors; in addition, every cycle contains vertices of at least three distinct colors. These formulations lead to new and more efficient algorithms for estimating Jacobians and Hessians.
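A minimal greedy distance-2 coloring, run here on an illustrative path graph, might look like the following sketch (the function name and graph are assumptions, not from the cited work):

```python
# Sketch: greedy distance-2 coloring, the model the framework links to
# structural orthogonality. Every vertex within two edges of v must
# receive a color different from v's.
def distance2_coloring(adj):
    color = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:            # distance-1 neighbors
            forbidden.add(color.get(u))
            for w in adj[u]:        # distance-2 neighbors
                if w != v:
                    forbidden.add(color.get(w))
        c = 0
        while c in forbidden:       # smallest color not forbidden
            c += 1
        color[v] = c
    return color

# Path 0-1-2-3-4: a distance-2 coloring of a path needs 3 colors.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(distance2_coloring(adj))  # → {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
```

Note the contrast with ordinary (distance-1) coloring, which needs only two colors on a path: the distance-2 condition is exactly what structural orthogonality of columns requires.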
Combinatorial problems akin to vertex elimination in sparse matrix factorization also arise in automatic differentiation. AD exploits the fact that the function to be differentiated is computed within a program by composing elementary arithmetic operations with intrinsic functions, such as log or sin, in some order that can be represented by a directed computational graph. The input variables, the output variables, and each intermediate variable in the derivative computation are the vertices in the computational graph. Each edge of the graph is labeled with the derivative of the operation that transforms its input variable to its output variable. AD computes derivatives by forming products of the edge labels along paths in the computational graph and summing these products as needed.
As in sparse Gaussian elimination, the computational graph can be transformed by the elimination of a vertex or edge, with suitable fill edges added to reflect the elimination. Elimination in this context corresponds to the accumulation of partial derivatives via the chain rule. In recent work, Uwe Naumann used the line graph of the computational graph to formulate a more general face-elimination technique.
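A toy vertex elimination on the computational graph of f(x) = x * sin(x) (an invented example; the dictionary representation and function name are assumptions) illustrates the chain-rule accumulation and the fill edge that elimination creates:

```python
# Sketch of vertex elimination in AD: eliminating an intermediate vertex
# multiplies the labels on its in- and out-edges (chain rule) and adds the
# product to any existing edge between the endpoints (a "fill" edge).
import math

def eliminate(edges, v):
    """edges: dict (src, dst) -> partial-derivative edge label."""
    ins = [(s, lbl) for (s, d), lbl in edges.items() if d == v]
    outs = [(d, lbl) for (s, d), lbl in edges.items() if s == v]
    for key in [k for k in edges if v in k]:
        del edges[key]                       # remove v's incident edges
    for s, ls in ins:
        for d, lo in outs:
            # chain rule: multiply along the path, sum into (possibly new) edge
            edges[(s, d)] = edges.get((s, d), 0.0) + ls * lo
    return edges

x = 1.3
# vertices: 0 = x (input), 1 = sin(x) (intermediate), 2 = x*sin(x) (output)
edges = {(0, 1): math.cos(x),   # d sin(x) / dx
         (0, 2): math.sin(x),   # direct partial of x*sin(x) w.r.t. x
         (1, 2): x}             # partial of x*sin(x) w.r.t. sin(x)
eliminate(edges, 1)
print(edges[(0, 2)], math.sin(x) + x * math.cos(x))  # both equal df/dx
```

After the single elimination, the remaining bipartite graph has one edge, labeled with the full derivative sin(x) + x*cos(x), mirroring how accumulation reduces the graph to input-to-output partials.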
Paul Hovland and Jean Utke presented several open questions about this elimination model at the CSC04 workshop. Here are two of them: (1) What is the optimal order of eliminations for minimizing the number of operations needed to compute the derivative matrix? (2) At the end of the elimination of the intermediate variables, we have a bipartite graph that connects input variables to output variables, but this graph might not be sparse. How do we identify a computational graph with the fewest edges at some intermediate stage during the elimination, so that the elimination can be stopped there to obtain a minimum-storage representation of the Jacobian?
Further developments in CSC are described in a special issue of Electronic Transactions on Numerical Analysis (ETNA), dedicated to Alan George and devoted to the themes of the CSC workshop, which will be published in 2005. A listserv for the CSC community is available at http://list.odu.edu/listinfo/CSC.
The Second International Workshop on Combinatorial Scientific Computing (CSC05), organized in cooperation with SIAM, CERFACS, ENSEEIHT-IRIT, and INRIA, will be held at CERFACS in Toulouse, France, June 21-23, 2005. Further information is available at http://www.cerfacs.fr/algor/CSC05. We hope to see you in Toulouse!
For Further Reading
E.G. Boman and B. Hendrickson, Support theory for preconditioning, SIAM J. Matrix Anal. Appl., 25:3 (2003), 694-717.
E.G. Boman, D. Chen, B. Hendrickson, and S. Toledo, Maximum-weight-basis preconditioners, Numer. Linear Algebra Appl., 11 (2004).
A.H. Gebremedhin, F. Manne, and A. Pothen, What color is your Jacobian? Graph coloring for computing derivatives, preprint, http://www.cs.odu.edu/~pothen/Papers/coloring.pdf, 2004.
U. Naumann, Optimal accumulation of Jacobians by elimination methods on the dual computational graph, Math. Program., to appear.
D. Spielman and S. Teng, Nearly linear time algorithms for graph partitioning, graph sparsification, and solving linear systems, http://arxiv.org/abs/cs.DS/0310051.
Alex Pothen is a professor of computer science at Old Dominion University, and Bruce Hendrickson is a Distinguished Member of Technical Staff in the Discrete Math and Algorithms Department at Sandia National Laboratories, Albuquerque.