A Vast Edifice of Fifty-year Predictions
November 21, 2000
Charles M. Strauss
The Age of Spiritual Machines. By Ray Kurzweil
Penguin Books, New York, 2000, 388 pages, $14.95.
When my cousin Tony and I first saw the Star Wars and Indiana Jones movies, we---independently---had the same delighted reaction: "These are exactly the kind of movies we would make if we could make movies." Fortunately for us, we never expressed a kind of converse to these thoughts: "But consider how appallingly bad any movies actually made by us would be, since we can't make movies." The book under review suffers from this sort of converse. The author is a world-class inventor, an extremely successful entrepreneur, and obviously a most creative thinker; but his current book, I feel, is not at all up to the standards of a good trade book.
The Age of Spiritual Machines, subtitled When Computers Exceed Human Intelligence, leads the reader on a fast ramble through the author's predictions for technological achievements over the next fifty years or so, with special emphasis on the implications of advancements in computer intelligence. I immediately become wary upon reading the words "computer intelligence." Having been wooed and disappointed so many times by this particular siren song, I have developed strong antibodies that will keep me from ever again listening, let alone succumbing, to the old AI tune.
From the very beginning of the book, the author's style is consistent: short, declarative sentences; few qualifiers; no doubts expressed that "this trend will continue" (page 3). Interposed every so often are a few pages in dialog form, between the author and an imaginary interlocutor named "Molly." Molly appears from her lines to be a bright enough college sophomore, majoring in English or sociology, who has had her head filled with politically correct opinions since early in high school and is going to have to spend four or so years after college emptying her head of all the stuff in there that is simply not true. Plato is considered to be cheating when he gives Socrates straw men to debate; modern authors too should be forbidden this easy rhetorical trick.
The book does have excellent reference sections, including a glossary, footnotes, and voluminous suggestions for further reading and Web sites. The editing and proofreading are superb.
The arguments of the book are not really scientific; they are more like theological arguments. The conclusions are not quantifiable, and so must be considered speculative fiction rather than truly grounded in fact. Notice that this view does not mean that the predictions given by the arguments will not turn out to be true. They may---and they may not. But I do not get much of a sense as to which alternative is more likely and why. The author's track record in the field of prognostication is very good (see pages 170-178)---but the examples he cites are predictions made and judged only over a 12-year period. Twenty-year predictions are a lot harder, and thirty-year predictions are harder still. Here I remember the voice of the governor in Leonard Bernstein's Candide snarling, "Well, they all believe what they are screaming. We'll see!"
The entire argument of this book is based on what the author calls "The Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (that is, the time interval between salient events grows shorter as time passes)" (page 30). Fine, no problem there; except that for the next two hundred or so pages, a vast edifice of predictions for the future is developed, with far too little argument and far too much flat declaration along the lines of "X will be Y in the year Z." For example: "After human capacity in a $1,000 personal computer is achieved around the year 2020, our thinking machines will improve the cost performance of their computing by a factor of two every twelve months" (page 105), and "The human population has leveled off in size at around 12 billion real persons" (page 222). Ordinarily, I would be merely bored by passages reminiscent of the prefaces to the science fiction of my youth, in which the authors use a paragraph or two to set the context for the imaginative work that is to follow. Unfortunately for my temper, I tend to heed Daniel Patrick Moynihan's recommendation to ask anyone who professes a truth two questions: What do you mean? How do you know?
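For readers inclined to take Moynihan's second question literally, it is worth spelling out the arithmetic that the quoted claim commits its author to. The following is a minimal sketch (the function name and year range are mine, not Kurzweil's) of what "a factor of two every twelve months" compounds to:

```python
# Compounding implied by the claim quoted above (page 105):
# cost-performance doubles every twelve months after 2020.
# Taking the (hypothetical) 2020 level as a baseline of 1,
# the multiplier after n years of annual doubling is 2**n.

def implied_multiplier(years: int) -> int:
    """Cost-performance multiplier after `years` of annual doubling."""
    return 2 ** years

# Ten years of doubling yields a 1,024-fold improvement;
# thirty years yields roughly a billion-fold improvement.
for year in (2030, 2040, 2050):
    print(year, implied_multiplier(year - 2020))
```

Whether one finds a billion-fold improvement by mid-century plausible is exactly the sort of question the book declines to argue.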
Kurzweil knows, he claims, by applying the law of accelerating returns. The lemmings, the gypsy moths, the bulls in 1929, the English in June of 1914 would all, no doubt, be comforted by this law and its applications. I have three words for this sort of prognostication: World of Tomorrow (at the 1939 World's Fair). Remember the clean streets and sparse traffic? Yes, things usually do go on, faster, better, different, but with an obvious continuity to whatever preceded the present---until something happens: a war, a plague, a religious movement, a something that constitutes a change of quality, not just quantity. I observe, in the world at present and in the history I have read, something akin to Stephen Jay Gould's "punctuated equilibrium" or (the modern) Adam Smith's summary of charted stock market figures: "Things tend to keep going the way they were going until they stop." Well, what then is the domain of applicability for the law of accelerating returns?
This book seems to have been written on a planet where malevolence---say, the introduction of harmful entities into a communications network just for the fun of it---is unthinkable. I guess I have just met a lot more bad people than the author (possibly my years in the public schools of Providence, Rhode Island, have something to do with this). Yes, he predicted the Web (page 171), but where were the predictions of computer viruses? And he predicted the fall of the Soviet empire, but where was any mention of Bosnia?
An even greater failing than this Pollyanna approach to the future is the almost complete absence of what I would call "systems thinking": the way one thinks about, models, characterizes, and deals with entities that are made up of other entities. Now I know that the author has a completely firm grasp of systems thinking, if only from his long and outstanding career as an inventor. Yet barely a trace of feeling about how complex the interactions of the component parts of a system can be---especially a naturally evolved system---is present anywhere in this book. A particularly egregious example leaps out at the reader in chapter 6 ("Building New Brains"), especially the section "Reverse Engineering A Proven Design: The Human Brain" (pages 118-129). That's where he lost me or, more exactly, caused me to lose the last shred of my suspension of disbelief.
Well, you have probably deduced by now that I didn't exactly love this book. Let me recommend as an antidote some A.E. Housman or Mark Twain or any other author with an attractive literary style and a salutary dose of pessimism in his soul and his work.
Charles M. Strauss is a senior member of the technical staff at the Charles Stark Draper Laboratory in Cambridge, Massachusetts.