[Originally published in the Santa Cruz, California weekly paper, the Express, vol. II no. 28 (September 9, 1982): 11-12]
September's Popular Science magazine contains a story with the dramatic title "Expert Systems - Computers That Think Like People." It sounds like quite a breakthrough: "An expert system . . . is a pipe-puffing savant with deep experience in a field of human knowledge. It can size up a situation and draw on a lifetime of experience to make reasoned judgements . . . handles ideas in the same way that humans do . . . it contains human wisdom." But one's expectations take a nosedive when one realizes that this alleged genius is nothing more than an electronic symptom catalogue. You have a runny nose? This computer will tell you that perhaps you have a cold. Fingers fall off? Leprosy is the answer. It may be a useful contraption, but it doesn't think any more than does your phone book. Neither does any other computer.
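To see how little thought is involved, here is a minimal sketch of such a symptom catalogue in modern Python - the rules and wording are invented for illustration and are not drawn from any actual system:

```python
# A toy "expert system" of the kind described above: a catalogue of
# symptom-to-diagnosis rules consulted by simple matching. The rules and
# wording here are invented for illustration only.

RULES = [
    ({"runny nose"}, "perhaps you have a cold"),
    ({"fingers falling off"}, "leprosy is the answer"),
    ({"runny nose", "fever", "aches"}, "possibly influenza"),
]

def diagnose(symptoms):
    """Return the diagnosis whose rule overlaps most with the reported symptoms."""
    observed = set(symptoms)
    best_rule, best_answer = max(RULES, key=lambda rule: len(rule[0] & observed))
    if not best_rule & observed:
        return "no matching entry in the catalogue"
    return best_answer

print(diagnose(["runny nose"]))           # -> perhaps you have a cold
print(diagnose(["fingers falling off"]))  # -> leprosy is the answer
```

The machine's "reasoned judgement" is nothing but table lookup, which is the point of the phone-book comparison.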
The editors of Popular Science apparently haven't learned anything since October 1944 when they ran an article on the IBM Mark I entitled "Robot Mathematician Knows All the Answers" - although D. R. Hartree, one of the great computer pioneers of the '30s and '40s, was incensed enough by the "electronic brain" hooha to write the London Times about how such talk "ascribes to the machine capabilities which it does not possess." Unfortunately, the Popular Science attitude is far more typical of the ostensibly scientific field of Artificial Intelligence, where wild exaggeration, pie-in-the-sky predictions, and incredibly sloppy thinking have become traditions.
The belief that mind could be reduced to machinery was the sole province of scientific fringe types for 20 centuries, but a few decades ago the rise of the electronic calculator (and its attendant swarm of mythologizing publicists) gave great impetus to the idea and made it respectable.
In spite of the lively look of electronic circuits, even the most powerful computers boil down to simple on/off switches arranged to perform arithmetic, though some of their clergymen, such as MIT's Marvin Minsky, claim that definition is "dreadfully misleading." However, the claim that mind is machinery is - if false - the most misleading one ever made. As the great philosopher-critic Joseph Wood Krutch wrote, "it banishes from the universe not only God but humanity itself." So, anyone still on the side of humanity may want to check out this admittedly difficult subject before abandoning ship. Shall we?
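For readers who want the "on/off switches arranged to perform arithmetic" claim made concrete, here is a small sketch - in modern Python, purely as illustration - of how ordinary addition reduces to nothing but boolean switching operations:

```python
# The "on/off switches arranged to perform arithmetic" point, made concrete:
# binary addition built from nothing but boolean (switch-like) operations.

def full_adder(a, b, carry_in):
    """Add three bits using only AND, OR and XOR, returning (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add(x, y, bits=8):
    """Add two small non-negative integers bit by bit, as a chain of full adders would."""
    result, carry = 0, 0
    for i in range(bits):
        sum_bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= sum_bit << i
    return result

print(add(19, 23))  # -> 42, produced by nothing smarter than switching
```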
The belief that minds are digital-computer programs is the "strong" Artificial Intelligence (AI) doctrine, as distinguished by AI critic John R. Searle from "weak" AI, which simply holds that computer models may be useful in some discussions of the mind. Nothing wrong with that. But because the strong AI advocates claim the mantle of scientific respectability (and because the stakes are so high), they invite whatever criticism they are vulnerable to. And that's a lot.
The general reader has most likely met strong AI in Douglas R. Hofstadter's Gödel, Escher, Bach (GEB), a charmingly well-written Pulitzer Prize winner self-described as "A Metaphorical Fugue on Minds and Machines." Its main support for strong AI - which I'll call "computerism" - is reference to Alan Turing's 1950 paper "Computing Machinery and Intelligence." Hofstadter can't find praise enough for this paper, which he calls "prophetic," "humorous" and "ingenious," and in this he's been joined over the years by most of the artificial intelligentsia, who tend simply to cite Turing's "classic" work on the rare occasions when they feel a need to justify their premises. Since most computerists take the contents of Turing's paper as much for granted as the multiplication tables, we have to take a closer look at it.
Turing uses a privilege most AIers have exercised ever since: the Proof by Imperial Redefinition. Admitting that the question under discussion is "Can machines think?" Turing wants precise definitions of those words while avoiding their "normal use," since "this attitude is dangerous." Because normality won't do, Turing refuses to define "machines" or "to think," and replaces the question with one "closely related to it": can a human interrogator be fooled by a machine into thinking he is talking to a person? Turing wants to ask whether a machine can appear to think, which ain't the same kettle of fish, and the rest of the paper tries to make this seem equivalent to asking whether machines can really think.
Trying to pin Turing down on this is like grabbing at a greased pig. He does get around to defining "machine" as a digital computer (contrasted with "actual human computers," a description of humans for which he gives no justification [note from 2001: I most likely misunderstood Turing's use of the pre-1950 sense of the word, which did in fact apply to professional human arithmeticians rather than to machines]), and eventually admits that "the original question 'can machines think?' I believe to be too meaningless to deserve discussion." In other words, he thinks the word "think" is meaningless.
That's quite a pill to swallow in itself and it makes his next comments even odder: he predicts that by the year 2000 one "will be able to speak of machines thinking without expecting to be contradicted." Strange that Turing should expect his Future Man to use that meaningless word "think"; apparently Turing believes thought is an illusion produced by digital calculation. But he never says it in so many words, and such dishonesty is bizarre for a man claiming to be scientific. Perhaps he only wants to keep a way of saying "I think . . ." - which otherwise sounds pretty silly.
Leaving the weird semantics aside, Turing confronts the argument from consciousness: people are conscious, machines aren't, and that's that (for a more developed version see Krutch's excellent The Measure of Man). Turing asks how we can be sure that a person thinks (again he uses the "meaningless" word), complaining that "the only way to know that a man thinks is to be that particular man" and that this is solipsism, the belief that only oneself exists.
The Imperial Redefinition is upon us again, but this time Turing seems unaware of it himself. He's defining "to know" as "to know with absolute certainty," and, of course, only your own consciousness can be absolutely certain. When Turing then settles for "the polite convention that everyone thinks," he is setting us up to accept the appearance of consciousness as the reality.
Hofstadter supports this simplistic behaviorism both in GEB and in his more recent The Mind's I, where he dictates that "any proper scientific account of the phenomenon of consciousness must inevitably take this somewhat doctrinaire step of demanding that the phenomenon be viewed as objectively accessible." His co-author, Daniel C. Dennett, actually writes of resurrecting the Behaviorist movement, though Hofstadter himself doesn't.
The point is, it wouldn't be good enough for the computerists to hand us a machine and defy us to prove it wasn't conscious: the burden would be on them to prove that it was. Searle has exposed the weakness of the Turing test with a simple thought-experiment: suppose you were locked in a room with Chinese symbols on paper and a rulebook in English telling you how to match them, so that you could push the right symbols out through a slot in response to symbols pushed in, and suppose the people outside called their symbols "questions" and yours "answers." Would that mean you understood Chinese?
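Searle's room can be sketched in a few lines - the symbol sequences below are invented placeholders standing in for the Chinese characters:

```python
# A sketch of Searle's point: following a rulebook can produce the right
# "answers" without any understanding. The symbol sequences below are
# invented placeholders, not real Chinese.

RULEBOOK = {
    "incoming-symbols-1": "outgoing-symbols-A",
    "incoming-symbols-2": "outgoing-symbols-B",
}

def person_in_the_room(slip_pushed_in):
    """Match the slip against the rulebook and push back whatever it says,
    without knowing what either slip means."""
    return RULEBOOK.get(slip_pushed_in, "outgoing-symbols-Z")

# To the people outside, the input is a "question" and the output an "answer";
# inside the room there is only rule-following.
print(person_in_the_room("incoming-symbols-1"))  # -> outgoing-symbols-A
```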
Hofstadter's lame but angry response is that it makes no sense "to think that a human being could do this." He's not only fudging but being hypocritical too, since his own writing swarms with preposterous thought-experiments (for example, imagining what it would be like to be "beamed," Star Trek style, from Mars to Earth). He and Dennett then insist that the "system" of the room as a whole understands Chinese.
Turing's paper cleared the way for a paroxysm of joyful guff about the coming age of mental machinery. Herbert Simon, one of the most prominent workers in the field, said in 1957 that "there are now in the world machines that think, that learn and that create" - not too good for a Nobel laureate, especially since he's still trying to build them. Minsky, who's been at it as long as Simon, also still believes that the foreseeable future holds machines smarter than us (he says, "If we are lucky, they might decide to keep us as pets" - so why is he trying to build them as fast as he can?). Actually, no setbacks ever discourage the true-blue computerists, but more on that later.
Four basic projects occupied the euphoric AIers of the 1950s: game playing, language translation, problem solving and pattern recognition. By the early '60s all of them had flopped. Chess programs could not win if they tried to imitate the human player's "zeroing-in" on an interesting situation before counting out possible moves; language translators could handle syntax but not semantics (one translator told Krutch his machine was "definitely" more like a slide rule than a brain); problem solvers could not themselves define a problem in the first place; and pattern recognition in humans turned out to be immediate perception of the whole, not building up from details like a machine.
In normal science, of course, failure means the hypothesis is wrong and you have to start over - but as you may have guessed, AI is far from normal science, and the reasons for the unfounded optimism go deep into the computerist faith.
The argument is a lot older than the Turing paper. Ever since the Greek materialist Democritus, some eccentric minority has believed the human mind to be explicable in mechanistic terms. But significantly, the first-rate scientists weren't among them. Descartes, Pascal, Leibniz (the latter two themselves inventors of calculators), Maxwell, Ohm, Gauss, Ampere, Kelvin and many others dismissed the idea from square one - and the fact is even the founders of the 20th-century computer were on their side: Babbage (its father
through the 19th-century Analytical Engine), Von Neumann (who replaced rewiring with programming), Vannevar Bush (builder of the first modern computer) and many others.
One of the computerist's worst flaws is his tendency to ignore these men when debating the possibility of mechanical thought. In one of his fantasy-dialogues in GEB, Hofstadter has Babbage say, "I would enjoy nothing more than working with your excellent Theme (AI)." The real Babbage said that "the machine is not a thinking being, but simply an automaton which acts according to the laws imposed upon it."
Today as before, the computerists are fringe figures and popularizers like Robert Jastrow, a former NASA engineer who's decided that the next step in evolution must be artificial minds superior to our own. This idea was revealed to him one day as he watched an IBM 360 and its fourth-generation replacement: "Suddenly I became aware that powerful forces were at work . . . man would be able to create a thinking organism of quasi-human power - a new form of intelligent life . . . a nonbiological intelligence, springing from the human stock, and destined to surpass its creator." In short, dead life.
Jastrow might have come down a little if he'd actually bothered to read some of IBM's own manuals, such as the one for the 650: "A computer is not a giant brain, in spite of what the Sunday supplements and science fiction writers would have you believe. It is a remarkably fast and phenomenally accurate moron. It will do what you tell it to do - no more, no less."
But the Jastrow Express was rolling and nothing seems likely to slow him down now, not even the difference between machinery and flesh-and-blood (which he considers "not essential to life").
Jastrow's loopy notions are nothing to worry about - though the apparent popularity of his books certainly is - but he shares a common foundation with the other computerists: the assumption that neurons are binary computer switches (which he calls "silicon neurons"). Hofstadter and the other more orthodox computerists take the "brain equals circuitry" assumption for granted when writing about mind-as-program; not one of the 27 selections in The Mind's I is about our knowledge of the physical brain. Unfortunately for Hofstadter & Co., it's not that simple.
Francis Crick (of double-helix fame) summed up the failure of the brain-computer analogy in the brain issue of Scientific American (September, 1979): "In a computer information is processed at a rapid pulse rate and serially. In the brain the rate is much lower, but the information can be handled on millions of channels in parallel. The components of a modern computer are very reliable, but removing one or two of them can upset an entire computation. In comparison the neurons of the brain are somewhat unreliable, but the deletion of quite a few of them is unlikely to lead to any appreciable difference in behavior."
A number of people, including Louis Pasteur, have continued to live their normal mental lives after losing half the brain. Crick's view was stated just as strongly in 1951 by the great neurophysiologist K. S. Lashley, who proved that memory is non-localized: "Descartes was impressed by the hydraulic figures in the royal gardens and developed a hydraulic theory of the action of the brain. We have since had telephone theories, electrical field theories and now theories based on the computing machines . . . we are more likely to find out how the brain works by studying the brain itself . . . than by indulging in far-fetched physical analogies."
The mind-as-program theory requires that the physical brain must not be intimately intertwined with mental effects - but it is. A certain kind of injury wipes out the ability to recognize faces, though the victim can still describe them and match photos of the same face from different angles. An odd example is found in violinist Fritz Kreisler, who could not talk for some time after a head injury except in Latin or Greek. Similar examples produced by surgery or drugs are too well known to bother repeating. Even stranger territory is the mind's influence on the body, as when a "fatal" cancer is defeated by positive willpower.
Modern physicists often don't believe that biology is comprehensible in their terms anyway. Niels Bohr, for one, insisted that living molecules are immune to exhaustive study because observation techniques have to create fatal disturbances. But the popularizers, and even some of the experts, don't know enough about modern physics to understand its viewpoint. One neurologist, who wrote that he hopes "the brain's functions are orderly and capable of being understood . . . without appeal to unknowable, supernatural processes," sounds just like Hofstadter insisting that consciousness shouldn't be "elevated to any 'magical,' nonphysical level."
Both men are still stuck in the 19th century, where confident scientists dismissed the notion that something inherently incomprehensible might be real. But three generations of physicists since then have known that the essence of the "material" universe really is unknowable. Matter equals energy, the vacuum is unstable, and the universe of Newtonian marbles bouncing dialectically off each other is gone forever.
When a computer scientist makes sweeping pronouncements, laypeople may not want to argue. After all, he's an expert, right? Well . . . maybe an expert on building and programming computers. However, the issues are bigger than that, and they include the hardest problems in philosophy, physics and biology. Just being an associate professor of computer science at Indiana University (like Hofstadter) doesn't make you an expert in those areas too.
If you'd like to be a little more expert in this area, try What Computers Can't Do by Hubert L. Dreyfus, and Brain, Mind and Computers by Stanley L. Jaki.
Computer engineers are understandably impressed by themselves - almost everyone thinks the computer whizzes are the Coming Thing, and listens religiously to whatever they say. But they have the engineer's obsession with gadgetry, and the engineer's conviction that more gadgetry is the way.
According to the San Jose Mercury, Apple founder Steve Wozniak "is convinced that the computer is going to bring us together in this world," and there may be no better answer to such simple faith than to just let it die a natural death.