Problem Based Benchmarks

Comparing algorithms using benchmarks has always been an important way of understanding their relative advantages at a more detailed level than big-O analysis allows, and also a way to understand the robustness of the cost models we use.  With the advent and ubiquitous availability of a broad variety of parallel architectures, such benchmarking has become even more important.  Without common benchmarks, how does one compare algorithms for GPUs, multicores, and distributed machines written in dozens of different programming paradigms and with no widely applicable or accepted cost model?  Unfortunately, and perhaps surprisingly, there is a real lack of common benchmark suites defined in terms of problem interfaces that can be used for this purpose.  The few standardized benchmarks tend to be quite specific, such as the database sort benchmark, which applies only to data stored on disk and is tested on random data in a particular format.

In this talk I'll discuss a benchmark suite we have been developing to help alleviate this problem.  In the suite the benchmarks are defined purely in terms of the problem interface, and each benchmark is associated with a set of test inputs.  The problems are selected to be reasonably broad, but simple enough that the algorithms that solve them can be readily understood and compared.  The benchmarks include problems such as comparison sort, removing duplicates, all nearest neighbors, 2D Delaunay triangulation, n-body force calculation, maximal independent set, minimum spanning tree, ray-triangle intersection, suffix arrays, breadth-first-search tree, and k-means.  In addition to the benchmarks themselves and the challenges of defining them in a general way, the talk will discuss a particular parallel implementation of the benchmark suite.
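To make the idea of a problem-defined benchmark concrete, the following is a minimal sketch in C++ (with hypothetical names, not the actual suite's code or API) of how a comparison-sort benchmark might be specified purely by its interface and checked and timed on one random test input; any implementation that meets the interface, sequential or parallel, could be substituted for the reference body.

    // Hypothetical sketch: a benchmark defined only by its problem interface.
    // The comparison-sort problem fixes the input/output contract; any
    // implementation (sequential, multicore, GPU) can be plugged in and timed.
    #include <algorithm>
    #include <cassert>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Problem interface: sort a sequence in place under operator<.
    // (Illustrative name only; replace the body with the algorithm under test.)
    void comparison_sort(std::vector<double>& a) {
        std::sort(a.begin(), a.end());   // reference implementation
    }

    int main() {
        // One test input from a (hypothetical) input suite: uniform random doubles.
        std::mt19937_64 gen(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        std::vector<double> a(1 << 20);
        for (auto& x : a) x = dist(gen);

        auto t0 = std::chrono::steady_clock::now();
        comparison_sort(a);
        auto t1 = std::chrono::steady_clock::now();

        // Correctness is checked against the interface, not the implementation.
        assert(std::is_sorted(a.begin(), a.end()));
        std::printf("sorted %zu keys in %.3f s\n", a.size(),
                    std::chrono::duration<double>(t1 - t0).count());
        return 0;
    }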

Guy E. Blelloch, Carnegie Mellon University

