What Computers Can't Do, Dreyfus

Page history last edited by Leland McCleary 15 years, 6 months ago

The symbol-grounding problem has been recognized for a long time. In 1972, philosopher Hubert L. Dreyfus published What Computers Can't Do, a critique of the then-hyperbolic first generation of Artificial Intelligence research. At the time, he was roundly attacked, but many of his critiques have stood the test of time. In today's terms, what he was discussing was the symbol-grounding problem, the vast difference between c-representations and m-representations, and the need for robots to have autonomy, to learn from experience, and to categorize their worlds in interaction among themselves: precisely the things today's leading robotics researchers are concerned with. He was very much ahead of his time. The Kelvin Smith Library has both the original book and the 1992 revision, which includes an extensive Introduction summarizing developments in AI in the twenty years after the book was first published.

 

Dreyfus,  Hubert L. What Computers Can't Do. [KSL stacks Q 335 .D74 1972]

Dreyfus,  Hubert L. What Computers Still Can't Do. 1992. [KSL stacks Q 335 .D74 1992]

 

The following is from Wikipedia. You will find much that sounds familiar:

 

Dreyfus's criticism of AI

 

Dreyfus's critique of artificial intelligence (AI) concerns what he considers to be the four primary assumptions of AI research. The first two assumptions he criticizes are what he calls the "biological" and "psychological" assumptions. The biological assumption is that the brain is analogous to computer hardware and the mind is analogous to computer software. The psychological assumption is that the mind works by performing discrete computations (in the form of algorithmic rules) on discrete representations or symbols.

 

Dreyfus claims that the plausibility of the psychological assumption rests on two others: the epistemological and ontological assumptions. The epistemological assumption is that all activity (either by animate or inanimate objects) can be formalised (mathematically) in the form of predictive rules or laws. The ontological assumption is that reality consists entirely of a set of mutually independent, atomic (indivisible) facts. It's because of the epistemological assumption that workers in the field argue that intelligence is the same as formal rule-following, and it's because of the ontological one that they argue that human knowledge consists entirely of internal representations of reality.

 

On the basis of these two assumptions, workers in the field claim that cognition is the manipulation of internal symbols by internal rules, and that, therefore, human behaviour is, to a large extent, context free (see contextualism). On this view, a truly scientific psychology is possible, one that will detail the 'internal' rules of the human mind in the same way that the laws of physics detail the 'external' laws of the physical world. But it is this key assumption that Dreyfus denies. In other words, he argues that we cannot now, and never will be able to, understand our own behaviour in the same way that we understand objects in, for example, physics or chemistry: that is, by considering ourselves as things whose behaviour can be predicted via 'objective', context-free scientific laws. According to Dreyfus, a context-free psychology is a contradiction in terms.

 

Dreyfus's arguments against this position are drawn from the phenomenological and hermeneutical tradition (especially the work of Martin Heidegger). Heidegger argued that, contrary to the cognitivist views on which AI is based, our being is in fact highly context-bound, which is why the two context-free assumptions are false. Dreyfus does not deny that we can choose to see human (or any) activity as 'law-governed', just as we can choose to see reality as consisting of indivisible atomic facts, if we wish. But it is a huge leap from there to claim that, because we can or want to see things this way, it is an objective fact that they are this way. Dreyfus argues that they are not (necessarily) so, and that, therefore, any research program that assumes they are will quickly run into profound theoretical and practical problems. Hence, he concludes, the current efforts of workers in the field are doomed to failure.

 

Given that Dreyfus has a reputation as a Luddite in some quarters, it is important to emphasise that he does not believe AI to be fundamentally impossible, only that the current research program is fatally flawed. He argues that for a device (or devices) to have human-like intelligence, it would need a human-like being-in-the-world, which would require it to have a body more or less like ours and a social acculturation (i.e. a society) more or less like ours. This view is shared by psychologists in the embodied psychology (Lakoff and Johnson 1999) and distributed cognition traditions, and his opinions are similar to those of robotics researchers such as Rodney Brooks, as well as researchers in the field of artificial life.

 

Daniel Crevier writes: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."

 
