## Letters to the Editor: One Hundred Digits: "Only a Means, Not the Objective"

**January 30, 2003**

*Nick Trefethen's 100-Digit, 100-Dollar challenge (SIAM News, January/February and July/August 2002) continues to attract interest. What follows are the responses of three challenge winners to Joseph Keller's letter to the editor (December 2002) expressing surprise at the absence of proof of the correctness of the answers.*

*To the Editor:*

The Trefethen problems were fun precisely because they were judged only on the actual numbers produced. Finding rigorous upper and lower bounds is a different skill, certainly a very demanding one, but, dare I say it, a rather tedious one. Take, for instance, the celebrated result of Archimedes on π. Once one has understood the basic idea, there is nothing very exciting in the third or fourth iteration of it.

Would a contest requiring proof of correctness have attracted so many entries? Would those entries have moved the problem setter to comment on the "sheer joy" that the contestants found in solving the problems? Would the setter have been able to verify the solutions in a reasonable amount of time?---*Dirk Laurie, University of Stellenbosch, South Africa.*

*******

*To the Editor:* What kind of proof does Professor Keller have in mind? Certainly not one in the style of Principia Mathematica. A proof given by a working mathematician is a chain of arguments that, by certain standards of rigor, leads from the unknown to the known, from the new to the old, from the difficult to the easy. There always has to be a rather large base of results that he takes for granted, that it would be unreasonable to question, and that he does not take care to check. He can do so because many people have checked these results and time has gone by. He can get into trouble if such a result later turns out to be unreliable, or if the standards of rigor change.

A solution to the 100-digit challenge, or to a problem of scientific computing in general, requires just the same skills and habits. Inventing a computational method for a problem that at first seems infeasible by straightforward approaches is also a transformation of the unknown into the known, of the difficult into the easy, using the same standards of rigor as above. However, the base you are using is different: you use known algorithms with known behavior, you account for unknown constants by varying precision and discretization parameters, and you rely on certain software packages and hardware. Software and hardware are used and checked by many people, and certainly they turn out to be unreliable now and then. You may take precautions, but checking this base by a rigorous proof would be completely unreasonable.

The mathematically interesting and challenging part of the 100-digit challenge is the invention of computationally feasible methods suited to the required accuracy. The actual digits are only a certificate of success, one that makes it particularly easy for the possessor of such a method to check that somebody else really has invented such a method too. The digits are only a means, not the objective.---*Folkmar Bornemann, Munich University of Technology, Germany.*

Interested readers can find a careful discussion of the accuracy and correctness of each solution in "Short remarks on the solution of the SIAM 100-digit challenge."

*******

*To the Editor:* The points about proof and correctness made by Joe Keller and Folkmar Bornemann are valid ones; as computer scientists, we know that we are always in danger of producing results that are wrong. Program verification has become a huge subdiscipline of computer science, one that unfortunately has not yet yielded concrete and universally accepted results. And, by the way, who verifies that the proof is correct?

What I consider proof is much simpler and much more effective than anything else: At least 19 people/groups, using independent methods, arrived at the same results. No proof will give me more confidence than that.

**A Software Engineering Lesson**

After the contest results were announced, I gave a lecture on the results in one of our courses in scientific computation. I focused on a small subset of the problems and showed some of the nice graphics. Then I posed a question: "You write a long program, you compute for hours, you get a result. How can you gain confidence that your result is correct?" This is particularly important when we have no intuition about what the results should be. During the lecture I proposed the following ideas (most of which Robert Israel and I used ourselves!).

(a) Solve the problem twice (different people or the same people but different programming languages). A lot of silly bugs are caught even if the same person codes the problem in two different languages. (We used Maple and C; about half of the problems were solved by both Robert and me.)

(b) Use two mathematically different methods. Compare the results for small instances of the problem in case one of the methods is substantially slower than the other. We solved the partial differential equation problems exactly and by approximate integration techniques.

(c) Solve small instances that can be verified by hand or with other independent tools (which may not be able to handle the big problems). Parameterize the problem if necessary. This way, we gain confidence that the method is correct. This was very applicable for the linear algebra problems.

(d) Approximate the answer, for example using Monte Carlo techniques. We may get only a couple of digits of accuracy, but we are using MC as confirmation, not to gain accuracy. I used MC simulation to verify the biased-flea problem.

(e) Exploit some special property of the problem. (This is very ad hoc, but if we are alert we can find some nice verifications.) In this case, the photon-reflection problem is self-verifiable. If you find an answer, use the same program to send the photon back, and it should arrive at the origin.
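Ideas (b) and (d) can be sketched in a few lines. The snippet below is only an illustration, not code from the contest: it computes a toy integral, ∫₀¹ e^(−x²) dx, with composite Simpson's rule and then confirms the leading digits with a crude Monte Carlo estimate. The integrand and all parameter choices here are hypothetical.

```python
import math
import random

def f(x):
    # Toy integrand: exp(-x^2) on [0, 1]; stands in for a "hard" problem.
    return math.exp(-x * x)

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule; n must be even. This plays the role of
    the accurate production method."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def monte_carlo(g, a, b, samples=200_000, seed=1):
    """Crude Monte Carlo estimate, used only as an independent sanity check."""
    rng = random.Random(seed)
    total = sum(g(a + (b - a) * rng.random()) for _ in range(samples))
    return (b - a) * total / samples

accurate = simpson(f, 0.0, 1.0)   # the "real" answer
rough = monte_carlo(f, 0.0, 1.0)  # confirmation, good to a few digits only
print(f"Simpson: {accurate:.10f}  Monte Carlo: {rough:.4f}")
```

Agreement in the first two or three digits is evidence that neither method has a gross bug; it increases confidence without being a proof, exactly in the spirit of the letter.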

Notice that all of the above "increase confidence," but are not proofs. This is fine for me.---*Gaston Gonnet, ETH Zurich, Switzerland.*