Mind Matters

A Critique of Impure Reason

Dreyfus's book What Computers Can't Do: A Critique of Artificial Reason, published in 1972, expanded and developed the arguments presented in "Alchemy and Artificial Intelligence." It can only be described as unfortunate that relationships between the AI community and what in many ways turned out to be one of its most cogent critics should have degenerated into such acrimony. In some cases years in advance, Dreyfus not only anticipated many of the difficulties that AI was to run into, but provided an explanation of why he considered it inevitable that they would happen. Both sides had points worth pondering, and one can only speculate what other, untried lines of research might have been opened to funding and investigation by a more constructive and sympathetically motivated dialogue.

The pattern of early dramatic successes with simple tasks, leading to diminishing returns and eventual disenchantment as attempts were made to extend the methods to more complex domains, could be accounted for, Dreyfus submitted, by four characteristics of human cognitive ability that enable humans to get around difficulties that programming methods have no way to avoid.

The first of these is that marginal awareness we possess of the situation around us, outside our immediate concern but registering sufficiently to seize attention if a good enough reason dictates--like the blurry area of peripheral vision that surrounds the highly resolved spot that our eyes are focused on, but applying to all the senses. As an example, Dreyfus gave a situation in a chess game, where a player had begun a description of his conscious thought processes with, "Again I notice that one of his pieces is not defended, the Rook, and there must be ways of taking advantage of this. . . ." How, Dreyfus asked, did the subject notice that the Rook was undefended? The conventional AI answer is, by unconsciously applying heuristics of the kind that programmers were trying to extract from players' heads and build into their programs. But Dreyfus took this more as an assertion of faith, for no master-level heuristics had been found. The conscious process that the subject described of looking for various alternative ways to attack the Rook--"counting out," as Dreyfus termed it, comparable to the tree-searching heuristics implemented in programs--began only after the player had "zeroed in" on that part of the board and that aspect of the position.

Analysis of the MacHack program which we met earlier showed that at a tough point in a tournament game the program had calculated for fifteen minutes and weighed up 26,000 alternatives before choosing a move--and quite an excellent one, as it turned out. A human, by contrast, would consider perhaps 100 in a similar situation, 200 at most, with a good chance of spotting something brilliant that the machine had missed. If, as conventional AI theory maintained, the human unconsciously counted out thousands of alternatives in a similar fashion to the machine, applying astoundingly powerful heuristics to get to the point of focusing on the Rook, why would he not simply carry the same process through to completion until the best move just pops into consciousness with no demand on effort at all? Why resort to the cumbersome process of having to consciously labor through the last few details?
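The scale of the gap can be illustrated with a little game-tree arithmetic--a sketch only, with branching factors chosen for illustration, not a reconstruction of MacHack's actual search:

```python
# Illustrative game-tree arithmetic (hypothetical figures, not MacHack's).
# A searcher that examines b candidate moves per position, d plies deep,
# visits on the order of b + b^2 + ... + b^d positions.

def positions_visited(branching_factor: int, depth: int) -> int:
    """Total positions in a uniform tree of the given branching and depth."""
    return sum(branching_factor ** d for d in range(1, depth + 1))

# A machine weighing many alternatives at every position:
machine = positions_visited(branching_factor=30, depth=3)  # 30 + 900 + 27000
# A player who has already "zeroed in" on a few candidate moves:
human = positions_visited(branching_factor=4, depth=3)     # 4 + 16 + 64

print(machine)  # 27930 -- the order of MacHack's 26,000 alternatives
print(human)    # 84 -- the order of the ~100 a strong human considers
```

The point of the arithmetic is that the totals are dominated by the branching factor, which is exactly what "zeroing in" reduces before any counting out begins.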

Four Pillars of Wisdom

Dreyfus's answer was, because the zeroing-in part didn't depend on program-like, heuristically guided searching at all. Rather, an ability to organize perceptions globally causes patterns recognized in the background to suddenly take on a significance that becomes instantly apparent, such as noticing the ticking of a clock when it stops, or the face of a friend when scanning the vaguely perceived faces in a crowd. According to Dreyfus, the inability to give a program such Fringe Awareness accounted for the pattern of early success and later failure in cognitive simulation. In game-playing and the kinds of puzzle problems solved by GPS, for example, the early successes were attained by working on those parts of the problem in which heuristic searching was feasible; failure set in where complexity reached the level at which such global awareness would be necessary to avoid exponentially explosive growth of the search problem.

But how do you write the rules for "Notice the Rook, if it's important"? For a program to decide that a Rook was important it would first have to "notice" it, which means that to give anything a chance to be deemed important everything would have to be examined--like having to constantly scan every element of the visual field--defeating utterly the purpose of the exercise. It is significant that the chess engines of recent years all owe their power to faster, more specialized hardware for extending the search space, not to advances in more "humanlike" evaluation methods. In the latter direction, Arthur Samuel's checkers player was about as far as it went. Dreyfus would perhaps say that was as far as it could go.
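The circularity can be made concrete with a toy sketch--hypothetical code, not any actual chess program: a rule like "notice undefended pieces" has no way to fire without first iterating over every piece, so the "noticing" is itself just another full scan.

```python
# Toy illustration (not a real chess engine): a program cannot "notice"
# an undefended piece without examining every piece -- the very exhaustive
# scan that fringe awareness lets a human avoid.

def undefended_pieces(pieces, defends):
    """pieces: list of piece names.
    defends: dict mapping each piece to the set of pieces it defends.
    To find anything 'worth noticing', we must first examine everything."""
    defended = set()
    for p in pieces:                      # full scan, piece by piece
        defended |= defends.get(p, set())
    return [p for p in pieces if p not in defended]

pieces = ["white rook", "white knight", "white pawn"]
defends = {"white pawn": {"white knight"}}
print(undefended_pieces(pieces, defends))  # ['white rook', 'white pawn']
```

However the importance test is phrased, the loop over every candidate comes first--which is Dreyfus's point.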

The second human faculty that Dreyfus held to be unprogrammable was our Ambiguity Tolerance, which he illustrated primarily with reference to the problems that the attempts at automatic language translation had run into. As we saw earlier, the order of words in a sentence--its syntax--is not sufficient to decide through formal rules which of several possible parsings is the appropriate one, and neither can the written context--the words surrounding a given word or phrase--be relied on to indicate a writer's or speaker's particular meaning. Yet people are generally able to get the intended point unequivocally--"zeroing in" again, on what matters against a background of extraneity that doesn't.

What makes the difference, and what computers can never share, Dreyfus says, is that when people use natural language they do so from the perspective of being involved in a particular situation and pursuing certain goals. It is this perspective, constantly changing, not precisely stated and in general not stable, which provides the cues needed to reduce the ambiguity to a level tolerable for the task at hand. An instruction like "Stay near me" can mean anything from "Don't let go of my hand" addressed to a child in a jostling crowd, to "Keep within a mile" in the case of a fellow astronaut exploring the moon. Such meanings are never unambiguous in all possible situations. But our shared context awareness makes them sufficiently unambiguous in any particular situation. And despite the chorus of protests from the AI citadels that Dreyfus didn't know what he was talking about, were not their later preoccupations with things like frames and restaurant scripts an acknowledgment that only with a more humanlike, situational slant would programs have a chance of making sense of anything?

In making his third point, Dreyfus refers to a book on psychological theory, Plans and the Structure of Behavior, which begins by quoting from Polya on the role of insight in problem solving:

First, we must understand the problem. We have to see clearly what the data are, what conditions are imposed, and what the unknown thing is that we are searching for.

Second, we must devise a plan that will guide the solution and connect the data to the unknown.

The authors then minimize the importance of the first part, or decide not to worry too much about it:

Obviously, the second part of these is most critical. The first . . . is indispensable, of course, but in the discussion of well-defined problems we assume that it has already been accomplished.

But the whole crux of solving complex problems, Dreyfus points out, lies precisely in grasping the essentials and structuring a plan in such a way that a workable method of solution can be applied. Like GPS's monkey, once the need to get to the banana has been specified and a tool with the requisite properties (the chair) selected from the world of undifferentiated objects, the rest can be handed over to a mechanical procedure that will eventually stumble on the right way of connecting them together. The standard response to this criticism is to cite "learning" as the answer, but short of trying everything, it could only mean learning to apply what was identified as relevant, thereby presupposing what needed to be solved. The only learning project of note at that time was Ed Feigenbaum's EPAM program for studying the association of nonsense syllables--the significance being, in Dreyfus's submission, that mechanized methods had proved effective in the one situation where any meaning, by design, was rigorously excluded and no form of comprehension required. In short, what humans are able to bring to bear at the outset of tackling a problem is the insight necessary for Essential/Inessential Discrimination, as opposed to being stuck with trial-and-error search. This is how they avoid bogging down as the dimensions of the problem grow beyond being well defined and restricted.

Finally, there was what Dreyfus called Perspicuous Grouping--that human ability we saw when talking about metaphors, analogies, and family resemblances, to instantly recognize "likenesses" in ways that are relevant to the purpose of the moment and which defy verbal description. We don't, Dreyfus contends, identify such patterns by extracting lists of features and matching them in the way a program must. Rather than assembling perceptions from primitive elements, we seem to go in the other direction, moving from the realm of globally identified concepts down to the level of consciously analyzing detail only when relevance has been established--as when looking for a way of attacking the Rook, or focusing on a visual feature that has already captured our attention. Our uncanny pattern-recognition ability requires a combination of fringe consciousness, ambiguity tolerance, and insight--all of which Dreyfus puts beyond the reach of digital machines. "It is no wonder, then," he comments dryly, "that work in pattern recognition has had a late start and an early stagnation."

Dreyfus went on to ask how, in the light of these problems, AI researchers were able to persist in their belief that what digital computers do reveals anything about hidden information processes in humans, and that there must be digital ways of performing human tasks. Noting that nobody in the field appeared to be reexamining such questions, he criticized AI as the least self-critical field on the scientific scene: "There must be a reason why these intelligent men almost unanimously minimize or fail to recognize their difficulties, and continue dogmatically to assert their faith in progress." Dreyfus's conclusion was that it had to lie in the force of their assumptions. He went on to identify them essentially as:

-- that the mind operates as a symbol manipulator according to formal rules.

-- that all knowledge can be formulated in terms expressible as logical relationships.

-- that the world--or at least, enough of it to produce intelligent behavior--can be analyzed into situation-free determinate elements, i.e., a set of facts each logically independent of all the others.

Disputing each one of these, Dreyfus develops the case that it is our situation as embodied beings, immersed from birth in experiencing a world of aims and purposes, that creates our perspective out of a potentially infinite reservoir of contexts. Without any comparable subjective view to guide it, a machine must be either confined within a limited domain where general processes remain tractable, or lost in a hopeless maze of permanently trying to interpret and look for connections between everything.

Content © The Estate of James P. Hogan, 1998-2014. All rights reserved.
