## Count, Recount, and Fuzzy Math

**January 9, 2001**

**Commentary by Philip J. Davis**

Breathes there the publication with soul so dead that it hasn't run an article on the tragicomedy---or perhaps the farce---known as USA Election 2000?

The recent turmoil involved counting, recounting, butterfly ballots, the very rules as to which ballots were to be counted, legal challenges, decisions, and further appeals. The nation desperately looked for an "exit strategy," and the proposed exit strategies themselves came under litigation. The tightness of the race reminded me of the incredible "butterfly phenomenon" of chaos theory. Here we have experienced the phenomenon. A butterfly flapped its wings in Florida, and the whole country was affected. All these events reopened for me philosophic thoughts about what I've for a long time called "fidelity in mathematical discourse."

The controversy was about applied mathematics at its simplest: counting, the place where mathematics was born. At the dawn of mathematics, notches were cut in a stick. This was followed by special symbols, which, in turn, led to arithmetic rules. Counting was a physical act done by humans. It became clear early on that when the objects to be counted were too numerous, say 2,000 or 3,000 jelly beans, the physical process could not be carried out with anything like the absolute fidelity required by the Platonic vision of mathematics, which asserts that there is one and only one answer. In the case of Election 2000, we were asked to count to one hundred million or beyond. From start to finish, thousands and thousands of people and machines were involved in the process.

Successive counts (often made by different people) may yield different results. This is called fuzzy math. (I'm using the term in the sense of processes involving mathematics that yield a number of answers when a single answer is required.) There is a common rule of thumb for defuzzing: Do it over again until two successive counts yield identical results. Then exit and adopt. This is the rule used by the old-time grocer who had a pencil behind his ear and moistened it on the tip of his tongue. He added the column of prices first going down, then going up. This is not unlike the exit strategy for computations in numerical analysis: Do it once, do it twice, or do it until the difference between successive computations is less than epsilon. (Of course, numerical analysts can then argue about epsilon.)
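The grocer's rule and the numerical analyst's exit strategy amount to the same stopping criterion: repeat until two successive results agree to within epsilon. A minimal sketch in Python (the function and parameter names here are illustrative, not from any particular library):

```python
def defuzz(measure, epsilon=0.0, max_tries=10):
    """Repeat a possibly noisy count or computation until two
    successive results agree to within epsilon (the grocer's rule)."""
    previous = measure()
    for _ in range(max_tries - 1):
        current = measure()
        if abs(current - previous) <= epsilon:
            return current  # two successive results agree: exit and adopt
        previous = current
    raise RuntimeError("no two successive results agreed")

# Example: the grocer's column of prices. A deterministic tally
# agrees with itself on the second pass, so the rule exits at once.
prices = [1.25, 0.99, 3.50]
result = defuzz(lambda: sum(prices))
```

Of course, as the text notes, one can then argue about epsilon: too tight and the process may never exit, too loose and two wrong answers may be adopted as agreement.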

We conceptualize idealized sets of objects and work with their symbolic surrogates. We've done it with tally sticks, with pencil and paper. We now use computing machinery to tally and to reckon. Zillions of such operations are carried out and accepted every day; but when push comes to shove with respect to some critical economic, social, or political question, some may cry: "Do it by hand. Hand tallies are more correct."

This may be true. The computer may have been programmed erroneously, or it may have malfunctioned. How does one check the programming or the physical operation of the computer? By doing the calculation over again. And it would be still better if it were done independently and on a different computer. For increased fidelity in a complex mathematical problem, check the output in a case in which you happen to know a "closed form answer."
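The suggestion to check output against a case with a known "closed form answer" can be sketched as follows. The example is hypothetical, not from the text: a machine tally of the first *n* integers is compared against the closed form *n*(*n*+1)/2, which is known independently of the tallying process.

```python
def checked_sum(n):
    """Tally 1 + 2 + ... + n, then verify against the closed form."""
    computed = sum(range(1, n + 1))   # the "machine" tally
    closed_form = n * (n + 1) // 2    # independently known answer
    if computed != closed_form:
        raise ValueError("tallies disagree: recount!")
    return computed

total = checked_sum(1_000_000)
```

Agreement here increases confidence in the tallying machinery; it does not, in the spirit of the argument above, confer absolute certainty.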

Thus, counting, the simplest, the most fundamental operation in mathematics, is largely a human impossibility when the numbers are sufficiently large. The simplest idea in the entire corpus of abstract mathematics relates to the intuitive mathematics of the real world in an exceedingly complex way. The value of a large corporation is not determined by counting, among other things, the number of light bulbs in its factories and offices. Standardized accounting procedures are used to arrive at a number. Different procedures may yield different values, and the value accepted in the commercial world may depend on competitive and other considerations. For instance, shall the company pension fund be listed as an asset? Accountants disagree. Released numbers are often targeted for certain audiences and "massaged" to that end. A corporation may release several balance sheets.

Consider the manner of counting the population of the United States, recently the subject of litigation. Mathematics proposes a number of schemes: Each method has pluses and minuses. The ultimate choice is not mathematical: It is political and legal. Everyone agrees that simple counting results in undercounts of certain groups. The Census Bureau now supplies two sets of figures (fuzzy math!): simple counting or counting augmented by sampling. By a unanimous decision of the U.S. Supreme Court, only the simple count can be used to determine congressional reapportionments.

Thus, applied math abounds with fuzzy math, and we have learned to deal with it. But what about pure math? Surely there is no fuzz there. You think not? Let me speak from recent personal experience.

There is an incorrect mathematical statement in *The Mathematical Experience* that has taken 19 years to surface. My co-author, Reuben Hersh, found it recently on a Web site titled Bertelsen's Number. If *p*(*x*) denotes the number of primes less than *x*, then *p*(10^9) (Bertelsen's Number) is 50,847,478 (or so we state on page 175); on page 213, though, we give *p*(10^9) as 50,847,534. Both numbers may be wrong, but both cannot be right. We did not compute the number ourselves. In the process of writing the book, we copied it from two "reliable" sources. The first value cited came from Hardy and Wright's famous *An Introduction to the Theory of Numbers*. We didn't realize that we had built in a contradiction. Whatever it was, the exact eight-figure value of *p*(10^9) was of little importance to us. We had other rhetorical fish to fry.

The second value mentioned ( . . . 534) is now cited by the Web site author as the correct one. But according to my philosophy, there is no way of determining with *absolute fidelity* the value of *p*(10^9). What we do have is a variety of methods for increasing the probability that a given answer is correct. As mentioned earlier, several independent persons, preferably using different methods and different computer software, can redetermine the value. Errors can be corrected, but how many independent redeterminations are sufficient before one goes into print?
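Redetermination by independent methods can be illustrated for *p*(*x*) itself at a small scale. A sieve and naive trial division are two unrelated ways of counting primes; their agreement (here cross-checked against the well-known value *p*(10^4) = 1,229) raises confidence without conferring certainty. A sketch in Python:

```python
def pi_sieve(x):
    """Count primes <= x via the sieve of Eratosthenes."""
    if x < 2:
        return 0
    is_prime = bytearray([1]) * (x + 1)
    is_prime[0] = is_prime[1] = 0
    for n in range(2, int(x ** 0.5) + 1):
        if is_prime[n]:
            # strike out every multiple of n starting at n*n
            is_prime[n * n :: n] = bytearray(len(is_prime[n * n :: n]))
    return sum(is_prime)

def pi_trial(x):
    """Count primes <= x by trial division -- an independent method."""
    def is_p(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(2, x + 1) if is_p(n))

# Two independent determinations, and a known reference value:
assert pi_sieve(10_000) == pi_trial(10_000) == 1229
```

At *x* = 10^9 the same strategy applies, only with heavier machinery; and the question remains, as above, how many independent redeterminations suffice before one goes into print.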

The literature of pure mathematics is so vast that there are many errors in it. Most of them are undiscovered until something important rests on one of them and someone investigates. People have compiled lists of errors. These lists themselves contain errors. Read Imre Lakatos's classic book *Proofs and Refutations*, where the history of the Euler-Poincaré theorem is presented as a comedy of errors. Read my own old article "Fidelity in Mathematical Discourse" (*American Mathematical Monthly*, March 1972) for a taxonomy of types of errors.

The pursuit of mathematics depends on a high level of often-unacknowledged trust and faith. If this is true for mathematics, it is most certainly true for human dealings. Trust has now been corroded on both sides of the political aisle. The law was not able to anticipate or deal with all the consequences of an unavoidable amount of fuzzy math. Hence, the vibratory back-and-forths of rulings that might be called political Parkinsonianism.

One characteristic of applied math at the level of action is that it must have an exit strategy that operates within relevant human time. Pure math cannot have an exit strategy. Its statements are forever open for correction, for improvement, for reinterpretation.

As the election crisis was deepening, President Bill Clinton is reported to have quipped, "The people have spoken; but what have they said?" It can equally be asserted: The mathematicians and the machines have spoken, but what have they said?

*Philip J. Davis, professor emeritus of applied mathematics at Brown University, is an independent writer, scholar, and lecturer. He lives in Providence, Rhode Island, and can be reached at **philip_davis@brown.edu**.*