Saturday, September 23

MS31
Getting More from Parallel Computing: Better Languages, Libraries and Algorithms

10:30 AM-12:30 PM
Center City 1

MPI has achieved acceptance as a preferred approach for implementing distributed memory parallel (DMP) applications, and OpenMP is gaining similar status for shared memory parallel (SMP) applications. At the same time, there is still tremendous opportunity for improvement. In this minisymposium, we present four efforts that provide better parallel computing capabilities outside of the traditional techniques. Specifically, we present languages with parallel expressions, communication libraries that are aware of memory architecture, and algorithms that take advantage of emerging architectures such as SMP clusters. We believe that such efforts are important to making qualitative advances in parallel computing.
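
To make the hybrid SMP-cluster theme concrete, below is a minimal sketch of the message-passing-plus-threading pattern the final two talks address: MPI moves data between nodes while OpenMP threads share memory within a node. It is illustrative only, not code from any of the talks; the dot-product kernel, array size, and compile command (e.g., mpicc -fopenmp hybrid_dot.c) are our own assumptions.

/* Illustrative hybrid MPI + OpenMP dot product: one MPI rank per SMP
 * node, OpenMP threads sharing memory within each node.
 * Assumed build: mpicc -fopenmp hybrid_dot.c -o hybrid_dot */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_LOCAL 100000 /* elements owned by each MPI rank (arbitrary) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    static double x[N_LOCAL], y[N_LOCAL];
    for (int i = 0; i < N_LOCAL; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Shared memory phase: OpenMP threads cooperate on the
       node-local partial sum with no message passing. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < N_LOCAL; i++)
        local += x[i] * y[i];

    /* Distributed memory phase: MPI combines the per-node
       partial sums across the cluster. */
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot = %g (ranks = %d, threads/rank = %d)\n",
               global, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

With one MPI rank per node and one thread per processor, the shared-memory reduction avoids intra-node message traffic, while MPI_Allreduce handles only the inter-node combination.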

Organizer: Michael A. Heroux
Sandia National Laboratories, USA
10:30-10:55 The Shared-Address-Local-Copy (SALC) Programming Model
Robert W. Numrich, Cray, Inc., USA
11:00-11:25 Enhancing MPI Applications Through Selective Use of Shared Memory on SMPs
David N. Shirley, Abba Technologies, Inc., USA
11:30-11:55 Hybrid Multithreaded/Message Passing On SMP Clusters
Edmond Chow, Lawrence Livermore National Laboratory, USA
12:00-12:25 Exploiting Shared Memory within Distributed Memory via Hybrid Algorithms
Michael A. Heroux, Organizer; and David N. Shirley, Abba Technologies, Inc., USA

©2000, Society for Industrial and Applied Mathematics