The Greening of High-Performance Computing

June 11, 2008


Without a shift in design, Horst Simon says, power costs could eclipse hardware costs, jeopardizing the affordability of large-scale computing systems.

Michelle Sipics

Power consumption and energy conservation, increasingly hot topics over most of the past decade, became a priority in high-performance computing only recently. These concerns are now a top focus for many cluster operators---as well as researchers in HPC.

Horst Simon highlighted this priority shift in a recent talk, "Energy Efficiency and the Greening of High Performance Computing," part of the Distinguished Lecture Series in Petascale Simulation hosted by the Texas Advanced Computing Center at the University of Texas at Austin. Simon is associate laboratory director for computing sciences at Lawrence Berkeley National Laboratory and an adjunct professor at the University of California, Berkeley (and moderator of the panel discussion on parallel processing described in this issue of SIAM News).

Simon cited general computing industry statistics from 2000 to illustrate the vast cost of power: All the electronics in the U.S. accounted for about 200 terawatt hours/year---a total cost of about $16 billion/year, along with 150 million tons of CO2. Those numbers, he pointed out, continue to increase.
The situation is similar in high-performance computing, where cooling and auxiliary equipment, along with high-end, mid-range, and volume servers, all contribute to energy costs. Without a shift in design, Simon suggested, power costs could actually eclipse hardware costs, putting the affordability of large-scale systems in jeopardy. In fact, facility issues, power, and cooling recently jumped to the top of the list of concerns for cluster operators.

To date, only a few researchers have focused on power consumption as an issue in computer architecture, and interest in the topic is just now beginning to increase. Few data are available to address power issues in HPC architectures. That should change, however, with several new projects under way at LBNL and the National Energy Research Scientific Computing Center (NERSC). Simon presented results of recent work by Kamil, Shalf, and Strohmaier (http://www.lbl.gov/cs/html/energy%20efficient%20computing.html) to develop measurement standards. Already, benchmarks run on the project's current test system have highlighted the role operating systems can play in power consumption: With the system running under a full load, only a small difference in power usage was seen when two different operating systems were compared; when the machine was idling, however, the difference was significant.

Development of multicore chips has been the computer industry's approach to keeping power consumption in check: two cores consume less power than a single core running twice as fast. But the industry's approach to parallelism is for the most part conservative, starting with the relatively complex cores designed during the single-core era, then doubling the number of cores as transistor density doubles. Efficient use of multicore chips is the focus at two new Universal Parallel Computing Research Centers, one at UC Berkeley and the other at the University of Illinois at Urbana–Champaign. Supported by an Intel/Microsoft initiative, the centers will work to advance mainstream parallel computing, in part by addressing the problems of load balancing and scheduling presented by multi- and many-core processors and systems.
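The power argument behind multicore can be made concrete with the standard dynamic-power approximation for CMOS, P ≈ C·V²·f, together with the rule of thumb that supply voltage scales roughly with clock frequency, so power grows roughly with the cube of frequency. A minimal sketch of that reasoning (the model and numbers are illustrative, not figures from the talk):

```python
# Illustrative sketch of the dynamic-power argument for multicore.
# Dynamic CMOS power is approximately P = C * V^2 * f; assuming supply
# voltage V scales roughly linearly with clock frequency f, power for a
# fixed design scales roughly with f^3.

def relative_power(freq_scale, core_count=1):
    """Power relative to one baseline core, under the V ~ f assumption."""
    return core_count * freq_scale ** 3

# One core at double the clock: ~2x throughput, but ~8x the power.
single_fast = relative_power(freq_scale=2.0, core_count=1)

# Two cores at the baseline clock: ~2x throughput (if the workload
# parallelizes), at only ~2x the power.
dual_baseline = relative_power(freq_scale=1.0, core_count=2)

print(single_fast)    # 8.0
print(dual_baseline)  # 2.0
```

The same cube-law reasoning underlies the "many simple, slow cores" approach described below: lowering the clock buys back far more power than it costs in per-core performance.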

Looking beyond multicore, LBNL researchers are investigating a more radical approach that combines a very large number of simple cores on each chip, leading to lower clock rates and reduced power consumption without loss of performance. The researchers are also borrowing design techniques from the consumer electronics industry to tailor chip design to the needs of applications. Because consumer electronics devices often run on battery power, power reduction is a first-order design concern. The LBNL researchers anticipate that their approach, code-named "Green Flash," could lead to improvements by a factor of 100 or more in power efficiency and effective performance over business-as-usual supercomputing. In discussing this approach, Simon stressed the importance of good design specifically tailored for HPC systems; designing for serial performance, he said, is a major source of "waste" in HPC, along with wasted bandwidth (data movement) and transistors (surface area). The embedded market---particularly in cell phones---addresses such issues by reducing functionality to the essential needs. Simon thinks that CPU design, previously driven by the personal computing industry, is now driven by electronic devices like cell phones---to the potential benefit of high-performance computing, if the industry seizes the opportunity.

"We will inevitably be running HPC applications on iPhones," he said. "Not literally, but something like this---the type of processors which drive iPhones."

LBNL recently entered into a strategic partnership with Tensilica. The Tensilica processors often used in cell phones are much less computationally powerful than those used in many HPC systems, Simon said, but they are also physically much smaller and have smaller power demands. By placing many such processors in the physical space currently used by a single larger processor, designers could obtain the same or better computational performance with even lower power consumption.

"The processor is the new transistor," Simon said, quoting Chris Rowen of Tensilica, alluding to yet another challenge in terms of programmability.

"We are actually living in a crazy world, if you think about it," Simon remarked. "Here we have systems like [IBM's] Blue Gene/L, with 65,000 processors . . . and how do we program it? Using MPI." He likened the current practice---explicitly decomposing a task, explicitly prescribing where the data go, explicitly programming how messages are exchanged---to programming the 2,300-transistor Intel 4004 from 1971 by explicitly specifying what each transistor would do. "It's clear that our MPI paradigm, if you look at this as being the future, is severely limited."
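The explicitness Simon objects to can be seen even in a toy example: summing an array "MPI-style" forces the programmer to hand-code the decomposition, the routing of every chunk, and the reduction. A minimal sketch in plain Python, with threads and queues standing in for MPI ranks and send/receive (all names here are illustrative, not an MPI API):

```python
# Toy illustration of MPI-style explicitness: the programmer decomposes
# the data, decides where each piece goes, and hand-codes every message.
# Threads and queues stand in for MPI ranks and send/recv calls.
import queue
import threading

def worker(inbox, outbox):
    chunk = inbox.get()      # explicit "receive" of this rank's data
    outbox.put(sum(chunk))   # explicit "send" of the partial result

def parallel_sum(data, nranks=4):
    inboxes = [queue.Queue() for _ in range(nranks)]
    results = queue.Queue()
    threads = [threading.Thread(target=worker, args=(q, results))
               for q in inboxes]
    for t in threads:
        t.start()
    # Explicit decomposition: split the data and route each chunk by hand.
    step = (len(data) + nranks - 1) // nranks
    for rank, q in enumerate(inboxes):
        q.put(data[rank * step:(rank + 1) * step])
    for t in threads:
        t.join()
    # Explicit reduction: collect every partial sum ourselves.
    return sum(results.get() for _ in range(nranks))

print(parallel_sum(list(range(100))))  # 4950
```

None of this bookkeeping expresses anything about the problem being solved, which is the heart of the scaling complaint: multiply it by tens of thousands of processors and the programming model, not the hardware, becomes the bottleneck.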

As high-performance computing research continues in this "crazy world," researchers at UC Berkeley's Parlab (funded as part of the Intel/Microsoft initiative) will examine programmability, and researchers at LBNL will explore alternative approaches to energy-efficient computing---each an issue that must be addressed if multi- and many-core processing is truly to make it to the mainstream. The university centers will receive a combined $20 million from Microsoft and Intel over the next five years for work toward the goal of mainstream parallel computing. Meanwhile, with the LBNL "Green Flash" project also under way, and so many high-performance minds addressing the power challenges facing high-performance computing, it's possible that change really is in the wind.

Readers can find information about LBNL's energy-efficient computing program at http://www.lbl.gov/cs/html/energy%20efficient%20computing.html. Information about Parlab is available at http://parlab.eecs.berkeley.edu/.

Michelle Sipics is a contributing editor at SIAM News.
