## IMA Celebrates 20 Prodigious Years

**October 31, 2003**

Joining IMA director Doug Arnold (back row, far right) at the two-day conference held in June at the Institute for Mathematics and its Applications are many of the speakers who traced progress in scientific areas advanced by the IMA in its 20 years. Front row, from left: Nancy Kopell, Wim Sweldens, Graeme Milton, and Andrew Odlyzko; back row, from left: George Papanicolaou, Panagiotis Souganidis, Margaret Wright, Charles Peskin, and Lai-Sang Young.

**Barry A. Cipra**

"The NSF feels like the proud parent of a 20-year-old," said William Rundell, director of the Division of Mathematical Sciences at the National Science Foundation. The occasion was a banquet, part of a star-studded, two-day conference, held June 6 and 7 at the University of Minnesota to celebrate the 20th anniversary of the Institute for Mathematics and its Applications---the NSF offspring in question. Also speaking at the banquet were the IMA's directors, past and present: Hans Weinberger (1982-87), Avner Friedman (1987-97), Willard Miller (1997-2001), and Doug Arnold (2001-).

The IMA opened shop in the fall of 1982---the brainchild of Weinberger, Miller, and George Sell, who had responded to an NSF request for proposals in 1979. "I was rather skeptical of the whole idea," Weinberger recalled, speaking at the banquet. Indeed, once the trio had submitted their proposal, "I thought that was the end of it."

Instead, they found themselves on the short list, and scheduled for a site visit in January 1981. Well aware of just how cold Minnesota can be in January, Weinberger once again "thought that was the end of it." As it happened, the site visit coincided with a warm spell; the IMA was approved and given a year to gear up, with Weinberger as director and Sell as assistant director (a position in which he would be succeeded by Miller and, currently, by Fadil Santosa). "I remember that as a rather hectic year," Weinberger said. "They were still laying rugs when the first program started."

*Hans Weinberger (center), the first IMA director, was on hand for the celebration, along with (from left) Bill Kamp of Axis Group Partners, Peter Rejto, a professor of mathematics at the University of Minnesota, Charles Peskin, who opened the conference with an impressive community lecture on his now three-dimensional model of the human heart, and Hans Othmer, also of the Minnesota mathematics department.*

Over the last two decades, the IMA has hosted thousands of visitors and sponsored scores of postdocs. With its emphasis on interdisciplinary research and outreach, "the IMA is a prime example of what applied mathematics should do," says Hans Kaper, a program officer in NSF's Division of Mathematical Sciences. "It's particularly valuable for young people," adds Deborah Lockhart, acting executive officer of DMS. The IMA enables postdocs to meet and work with experts in their field. "You just can't beat that," Lockhart says.

Gene Wayne of Boston University, who was one of the first IMA postdocs, agrees. "It was an incredibly valuable experience for me," he says. "I was just a fresh PhD. People were extremely willing to share their time and ideas." (Wayne was a co-organizer of the anniversary conference, along with Barbara Keyfitz of the University of Houston and David Dobson of the University of Utah.)

"The IMA has been so successful, the NSF has apparently begun paving the nation with carbon copies," Weinberger told the banquet audience. One person with good reason to be happy about this turn of events is Avner Friedman, who now directs the NSF-funded Mathematical Biosciences Institute at Ohio State University. In his own remarks, Friedman recounted the genesis of the IMA's industrial postdoc program. To drum up sponsorship, he first convinced three managers at 3M to contribute $15,000 apiece, the maximum an individual could promise without having to seek higher approval. The managers at Friedman's next stop, Honeywell, wanted to know what 3M was doing. "I'm glad you asked," Friedman recalled saying.

**Hearts and Minds**

The scientific program surveyed many of the topics covered by IMA programs in its first twenty years. Charles Peskin of the Courant Institute opened with a public lecture on the detailed models of the human heart that he and colleagues, especially David McQueen, began to develop even before the IMA came into existence. Peskin's early work earned him the moniker, coined by Mark Kac, "the man with the two-dimensional heart." The model, however, has been fully three-dimensional since around 1987. Based on the approach to computational fluid dynamics developed by Peskin, called the immersed boundary method, the model keeps track of the mechanical forces exerted on and by the fibers constituting the heart, along with the interaction of the fibers with the surrounding fluid.
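Schematically, the immersed boundary method couples an Eulerian fluid to Lagrangian fibers: the fluid sees the fibers only through a body force, and the fibers move at the local fluid velocity. In its standard continuous form (a sketch of the general framework, not the specific variables of Peskin's heart code):

```latex
\rho\left(\frac{\partial u}{\partial t} + u\cdot\nabla u\right)
  = -\nabla p + \mu\,\Delta u + f, \qquad \nabla\cdot u = 0,
\\[6pt]
f(x,t) = \int F(s,t)\,\delta\bigl(x - X(s,t)\bigr)\,ds, \qquad
\frac{\partial X}{\partial t}(s,t) = u\bigl(X(s,t),t\bigr).
```

Here $X(s,t)$ is the fiber configuration, $F$ is the elastic (Hooke-type) force density along the fibers, and the delta function does double duty: it spreads fiber forces onto the fluid, and in discretized form it also interpolates the fluid velocity back onto the fibers.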

In simulations, the fibers contract on command, through stiffening of their spring constants and shortening of their resting lengths; the result is a convincing heartbeat. Peskin, though, is currently working to improve the model by incorporating some electrophysiology into what has so far been a purely hydromechanical construct: He plans to add the Hodgkin-Huxley equations, which describe the electrical properties of cell membranes, to the established mix of Hooke's law and the Navier-Stokes equations.
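The Hodgkin-Huxley system Peskin plans to add is a four-variable set of ODEs for the membrane voltage and three ion-channel gates. A minimal sketch using the classic 1952 squid-axon parameters (cardiac membrane models are considerably more complicated; all numbers below are the textbook squid values, not anything from the heart model):

```python
import numpy as np

# Classic Hodgkin-Huxley membrane model (1952 squid-axon parameters).
# V is the membrane potential displacement from rest, in mV.
C_M = 1.0                            # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 115.0, -12.0, 10.6  # reversal potentials, mV

def rates(V):
    """Voltage-dependent gating rate constants (1/ms)."""
    a_m = 0.1 * (25.0 - V) / (np.exp((25.0 - V) / 10.0) - 1.0)
    b_m = 4.0 * np.exp(-V / 18.0)
    a_h = 0.07 * np.exp(-V / 20.0)
    b_h = 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)
    a_n = 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)
    b_n = 0.125 * np.exp(-V / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration; returns the voltage trace (mV)."""
    V, m, h, n = 0.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        I_na = G_NA * m**3 * h * (V - E_NA)   # fast sodium current
        I_k = G_K * n**4 * (V - E_K)          # delayed-rectifier potassium
        I_l = G_L * (V - E_L)                 # leak
        V += dt * (I_ext - I_na - I_k - I_l) / C_M
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(V)
    return np.array(trace)
```

With a sustained injected current of 10 uA/cm^2 the model fires action potentials roughly 100 mV in amplitude; with no input it sits quietly at rest. This membrane model is what would replace the prescribed contraction commands in a hydromechanical-only simulation.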

Actually, Peskin explains, the model needs to keep track of two separate kinds of current and voltage: intracellular and extracellular. The intracellular current-voltage relationship is highly anisotropic, being much more strongly influenced by the local heart muscle fiber direction than is the case extracellularly. Meanwhile, transmembrane current, governed by an especially complicated version of the Hodgkin-Huxley equations, acts as a source of extracellular and a sink of intracellular current. Peskin explained how the immersed boundary method can be generalized to handle the electrophysiology of the heart as well as its fluid and tissue mechanics.
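A standard way to write down this two-potential bookkeeping is the bidomain model of cardiac tissue (sketched here in its usual form; Peskin's immersed-boundary generalization may differ in detail):

```latex
\chi\left(C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}\right)
  = \nabla\cdot(\sigma_i \nabla \varphi_i), \qquad
\nabla\cdot(\sigma_i \nabla \varphi_i + \sigma_e \nabla \varphi_e) = 0, \qquad
V = \varphi_i - \varphi_e.
```

Here $\varphi_i$ and $\varphi_e$ are the intracellular and extracellular potentials, $\chi$ is membrane area per unit volume, and the conductivity tensors $\sigma_i, \sigma_e$ are anisotropic, aligned with the local fiber direction---much more strongly so for $\sigma_i$. The transmembrane term $C_m \partial V/\partial t + I_{\mathrm{ion}}$ enters the two domains with opposite signs, which is exactly the source/sink structure Peskin described.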

For help in this new direction, Peskin might look to Nancy Kopell of Boston University, who spoke on dynamical systems in neurobiology. Kopell has long been at the forefront of research in mathematical neuroscience, for which she offers a three-word summary: "analyzing the intractable."

Neurobiology holds "many, many opportunities for mathematics," Kopell says. This stems in part from progress in sensing technology. It's easy now to get lots of data about the brain. What's still hard is drawing implications from the data. "That's where mathematics will be crucial," Kopell says.

The challenge is levels. Studies of neural processes run the gamut from subcellular and single-cell systems, to small networks, to larger but still local networks, to functional networks that correspond to recognizable neural tasks, to animal behavior as a whole. Traditionally, researchers have used different techniques for each level without much crosstalk among methods and levels. That's now changing, Kopell says. "What's new here is the sense that we can go from top to bottom."

In her own research, Kopell has focused on rhythms in the brain. These synchronized firings of neurons at characteristic frequencies have long been associated with cognitive states, such as relaxation (alpha waves), motor planning (beta), attention (gamma), and learning (theta). Diseases like Alzheimer's, schizophrenia, and Parkinson's are associated with specific abnormalities in these rhythms. What's not clear, Kopell says, is whether any of the associations mean anything. Brain waves could be coincidental, or vestigial, or even artifactual, with no connection to what the brain is built for. But "many of us would be astonished" if that were the case, she says.

Among the issues Kopell discussed is the role of inhibition---specifically, suppression of the firing of a neuron by the firing of another---in synchronizing local networks. Results of recent work with Bard Ermentrout of the University of Pittsburgh may explain why certain networks combine inhibitory cells and "gap" junctions (synapses that couple electrically rather than chemically). In brief, the electrical coupling makes a heterogeneous collection of cells behave in a sufficiently homogeneous fashion that the decay time of inhibition determines the period of the rhythm.
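The mechanism can be caricatured with a single self-inhibiting leaky integrate-and-fire cell: each spike recruits an inhibitory conductance that must decay before the cell can fire again, so the synaptic decay time, not the membrane time constant, sets the period. A toy sketch (all parameters are illustrative inventions, not values from the Kopell-Ermentrout work):

```python
import numpy as np

def rhythm_period(tau_syn, tau_m=10.0, drive=1.5, g_jump=5.0,
                  dt=0.01, t_max=2000.0):
    """Steady-state inter-spike interval (ms) of a leaky integrate-and-fire
    cell that inhibits itself: each spike increments an inhibitory
    variable g, which then decays with time constant tau_syn (ms)."""
    V, g = 0.0, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        V += dt * (-V + drive - g) / tau_m   # membrane dynamics
        g += dt * (-g / tau_syn)             # inhibition decays
        if V >= 1.0:                         # threshold crossing = spike
            V = 0.0                          # reset
            g += g_jump                      # spike recruits inhibition
            spikes.append(step * dt)
    return float(np.diff(spikes)[-1])
```

Quadrupling `tau_syn` roughly quadruples the interval between spikes, since the cell cannot climb back to threshold until the inhibition has decayed away---the toy version of "the decay time of inhibition determines the period of the rhythm."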

As researchers tackle larger networks, new mathematical problems will arise, Kopell says. With the right mathematics, she believes, the dynamical systems approach "is going to be a very powerful tool for understanding the workings of the brain."

**Analyze This!**

Panagiotis Souganidis of the University of Texas at Austin, who was one of the early IMA postdocs, surveyed a subject that's just about exactly the same age as the IMA: viscosity solutions. This approach to a large class of partial differential equations debuted in a 1981 paper by Michael Crandall, then at the University of Wisconsin (now at the University of California at Santa Barbara), and Pierre-Louis Lions of the University of Paris. Souganidis himself has been one of the people instrumental in developing the theory.

*Peter Bates (left), chair of the mathematics department at Michigan State University, and Panagiotis Souganidis of the University of Texas, Austin. An early IMA postdoc, Souganidis returned as an invited speaker at the 20th-anniversary conference; his talk was on viscosity solutions to a large class of partial differential equations.*

Roughly speaking, viscosity solutions are the "correct" class of solutions to fully nonlinear first- and second-order partial differential equations. By "correct" solutions, researchers mean those for which they can prove existence, uniqueness, and stability theorems. The theorems are not mere formalities: They are crucial for applications. Stability theorems, for example, provide a simple way to prove convergence of approximations, thereby giving confidence in numerical solutions.
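Concretely, for a scalar first-order equation the Crandall-Lions definition shifts all derivatives onto smooth test functions that touch the graph of $u$:

```latex
H(x, u, Du) = 0 \ \text{ in } \Omega, \qquad u \in C(\Omega):
\\[6pt]
\textbf{subsolution:}\quad \forall\,\varphi \in C^1(\Omega),\ \
x_0 \text{ a local max of } u - \varphi
\ \Longrightarrow\
H\bigl(x_0, u(x_0), D\varphi(x_0)\bigr) \le 0,
\\[6pt]
\textbf{supersolution:}\quad
x_0 \text{ a local min of } u - \varphi
\ \Longrightarrow\
H\bigl(x_0, u(x_0), D\varphi(x_0)\bigr) \ge 0.
```

A viscosity solution is both a subsolution and a supersolution. Because $u$ itself is never differentiated, merely continuous functions qualify---which is what lets the theory handle equations whose solutions develop kinks.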

Viscosity solutions were originally motivated by the theory of optimal control and differential games; Crandall and Lions developed them for the first-order Hamilton-Jacobi equations. The theory took a while to catch on, Souganidis says. "In the beginning there were not many applications, so people did not pay much attention to it." But as the theory---especially for second-order equations---took shape, applications began emerging, "and then it took off." Applications today range from mathematical finance to image processing to phase transitions to the study of turbulent combustion.

New applications of the theory include stochastic partial differential equations---especially nonlinear, first- and second-order stochastic PDEs with multiplicative noise---a.k.a. Brownian motion. (While there is an extensive literature for the linear and quasilinear cases, Souganidis says, there is "very little, if anything, known in the nonlinear setting.") In a pair of papers in 1998, Lions and Souganidis laid out the rudiments of stochastic viscosity solutions and identified as applications pathwise stochastic control (think interest-rate models in mathematical finance) and phase transitions and front propagation in random media. Among the many open problems are questions regarding the existence of invariant measures and the random properties and asymptotic behavior of solutions. "There are fascinating problems in the stochastic theory," Souganidis says.

Viscosity solutions could come in handy in the differential equation-dense study of composite materials. Graeme Milton of the University of Utah described progress in the subject, which he calls "an old field full of new surprises." The study of composite materials is a topic of longstanding interest for the IMA. Among early IMA offerings were programs and workshops on the mathematics and physics of disordered media (1983), homogenization and effective moduli of materials and media (1984), and random media (1985). During the 90s there were programs on microstructure and phase transitions (1990), waves in random and complex media (1994), mathematical methods in materials science (a special year in 1995-96), and continuum methods and nonlinear PDEs (1999). Most recently, there was a special year on mathematics in geosciences (2001-02).

Composite materials are ubiquitous in nature, Milton points out. Geology offers examples in the form of layered and porous media, polycrystalline rocks, and even sea ice. Biology offers yet more: bones and shells, lung tissue, wood, tendons, and colloidal suspensions, such as milk. Human efforts to mimic nature's accomplishments have led to industrial applications, such as the creation of lightweight materials for aerospace, ceramic-metal composites---known as cermets---for solar energy, and fibrous materials for thermal insulators.

In the past, the development of "designer" materials with novel properties amounted mainly to trial and error: Put two or more pure substances together and see what you get. Mathematical analysis and computer models will surely speed things up---and may do much more.

"In this century, we will have greater flexibility to produce designer composites tailor-made to meet specific needs," Milton says. In particular, he says, mathematics can reveal what's possible (or impossible) and give insights into what features of the microstructure are important when putting materials together.
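A classical instance of mathematics delimiting what is possible is the Hashin-Shtrikman bounds, which bracket the effective conductivity achievable by any isotropic mixture of two conductors at a given volume fraction. A minimal sketch of the three-dimensional two-phase formula (parameter names are mine; this example is not taken from Milton's talk):

```python
def hashin_shtrikman(sigma1, sigma2, f2):
    """Hashin-Shtrikman bounds on the effective conductivity of an
    isotropic 3-D two-phase composite.  sigma1 <= sigma2 are the phase
    conductivities; f2 is the volume fraction of phase 2."""
    assert 0.0 <= f2 <= 1.0 and sigma1 <= sigma2
    if sigma1 == sigma2:
        return sigma1, sigma1            # trivial case: a single material
    f1 = 1.0 - f2
    # Lower bound: well-conducting inclusions in the poor conductor.
    lower = sigma1 + f2 / (1.0 / (sigma2 - sigma1) + f1 / (3.0 * sigma1))
    # Upper bound: poorly conducting inclusions in the good conductor.
    upper = sigma2 + f1 / (1.0 / (sigma1 - sigma2) + f2 / (3.0 * sigma2))
    return lower, upper
```

The bounds are attained by coated-sphere assemblages and are strictly tighter than the naive arithmetic- and harmonic-mean (series/parallel) estimates---a small example of how analysis can say in advance what no microstructure, however clever, can achieve.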

"What's not obviously forbidden may actually occur," Milton says.

Even some things that would seem obviously forbidden can---and do---occur: "anti-rubber," which thickens when pulled, for example, or a mixture of two metals that contracts when heated, even though each metal individually expands. Designing such materials, especially with an eye on optimizing their desirable properties, calls for a detailed understanding of how the geometry of the microstructure influences the overall response to applied fields.

Milton and other material-minded mathematicians are providing that understanding. He and Utah colleague Andrej Cherkaev have shown how two materials---one strictly stiff and the other purely compliant---can be combined to give any elastic behavior desired; "concrete rubber," for example, is unyielding when force is exerted from all sides but compliant when squeezed on only two sides. In a 2002 paper (*SIAM Journal on Applied Mathematics*, Vol. 62, No. 6), Milton and Knut Solna (now at the University of California, Irvine) showed that an electromagnetic signal can travel faster---and at the same time undergo less dispersion---in a composite than in either constituent.

Despite the advances, a lot of unexplored territory remains, Milton says. Indeed, "the relation between microscopic and macroscopic properties is still poorly understood." A scientific composite of mathematics and materials science will likely enhance the development of each. The same is true of mathematics and all its applications---as the IMA has spent the last 20 years proving.

*Barry A. Cipra is a mathematician and writer based in Northfield, Minnesota.*