Computers And Society
David D. Thornburg
Innovision
Los Altos, CA
Further Ramblings On The Mind…
When I first started reading Douglas Hofstadter's book, Gödel, Escher, Bach, I thought I would be lucky to finish reading it by 1990. While the book is fascinating and I pick it up from time to time, I have had to set it aside for more pressing matters. It was thus with some trepidation that I bought a copy of The Mind's I, a recently published book (Basic Books) by Douglas Hofstadter and Daniel Dennett.
Hofstadter's field is artificial intelligence, and Dennett's is philosophy. Dennett recently published a collection of his essays on epistemology (Brainstorms: Philosophical Essays on Mind and Psychology, MIT Press). It appears that these two powerhouse thinkers decided to collaborate on a book covering an area of immense interest to each of them—the nature of the mind.
At first glance, Mind's I appears to be a collection of articles from various sources, each of which deals with one perspective on the concept of the mind. Hofstadter's and Dennett's notes after each article provide a cohesive framework which helps the book hang together. For example, Alan Turing's landmark article "Computing Machinery and Intelligence," in which the famous Turing test is described, is followed by "The Turing Test: A Coffeehouse Conversation," an article Hofstadter first published in Scientific American.
The Turing Test
Turing's test, in its simplest form, has an experimenter sitting at two terminals—one of which is connected to a computer and the other of which is connected to a similar terminal manned by another human being. The experimenter is free to direct questions through each terminal and is supposed to deduce, from the responses, which terminal is connected to the computer. Turing suggested that if the experimenter is not able to do this reliably, then we can say that the computer is, in fact, thinking.
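The shape of this experiment can be sketched in a few lines of code. Everything below is a hypothetical illustration, not anything from Turing's paper: the canned respondents, function names, and questions are all invented stand-ins, meant only to show the structure of two anonymous terminals and an experimenter who must guess which is which.

```python
import random

# Hypothetical stand-ins for the two parties behind the terminals.
# A real test would involve open-ended conversation; canned answers
# merely keep the sketch self-contained.
def machine_respondent(question):
    canned = {
        "What is 2 + 2?": "4",
        "Do you enjoy poetry?": "I find sonnets difficult to judge.",
    }
    return canned.get(question, "I am not sure how to answer that.")

def human_respondent(question):
    canned = {
        "What is 2 + 2?": "Four, of course.",
        "Do you enjoy poetry?": "Yes, especially Dickinson.",
    }
    return canned.get(question, "Hmm, let me think about that.")

def run_session(questions):
    # Randomly assign the respondents to terminals A and B, so the
    # experimenter cannot rely on position to identify the computer.
    pair = [("machine", machine_respondent), ("human", human_respondent)]
    random.shuffle(pair)
    terminals = dict(zip("AB", pair))

    # The experimenter sees only the transcript, never the answer key.
    transcript = {
        label: [(q, respond(q)) for q in questions]
        for label, (identity, respond) in terminals.items()
    }
    answer_key = {label: identity for label, (identity, _) in terminals.items()}
    return transcript, answer_key
```

If the experimenter's guesses against the answer key are no better than chance over many sessions, Turing's criterion says the machine is thinking. The interesting work, of course, lies in the respondents, not in this scaffolding.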
In Hofstadter's article, the issue is raised as to whether a good simulation of thinking is the same thing as thinking itself. This question recurs several times in the book and is not easily answered.
The collection of articles in this book covers the concept of the mind from a multitude of approaches. Hofstadter and Dennett provide a balanced picture. The strict reductionist view of life and mind resulting from a seething molecular soup in which small units, accidentally formed, are subjected to fierce competition for resources with which to replicate, is presented by an excerpt from Richard Dawkins's book, The Selfish Gene. A more mysterious quality for the mind is suggested by Harold Morowitz's article "Rediscovering the Mind," which first appeared in Psychology Today. One cannot help but be struck by the tremendous diversity of opinion expressed in this book. There is something to please and infuriate any reader, regardless of his or her philosophical leanings.
The function of this book is less to present a particular view than to raise the level of conversation about the topic. After all, it is senseless to ask if machines can think when we have yet to agree on just what thinking or consciousness is.
Dennett's book, Brainstorms, has a different goal. The collection of essays in this book is designed to elucidate Dennett's own philosophical view of the mind—a view which is aided by the experimental evidence being accumulated in many fields. His theory differs from other models in important ways. The physical model of the mind, for example, implies that when two creatures have the same thought in common (e.g., the belief that snow is white), then they have something physical in common too (their brains are in the same physical state). This is extremely unlikely, as Dennett points out.
Intentional Systems
His theory does not deny the possibility of a correspondence between mental and physical states. Instead he concentrates on the idea that the mind is an intentional system—one whose behavior can, at least sometimes, be explained and predicted by treating it as though it had beliefs and desires.
If one looks only at external views of the system, it is logical to ask if this model applies to machines as well as to human minds. Consider a computer programmed to play chess. One can examine this system from three perspectives. By taking the design stance, one can predict the game's behavior by knowing the details of the computer and its program. As long as the system behaves as programmed, predictions made from this analysis will be true. This stance is most useful when dealing with simple systems (strike a match and it will light). The physical stance bases predictions on the actual physical state of the system, and then uses the laws of nature to predict what will happen next. This approach is most difficult to apply to a machine as complex as a digital computer.
Chess playing computers are practically inaccessible to prediction from either the design or physical stance. Even their own designers would have a hard time describing these machines' behavior from the design stance. The best strategy for someone playing against such a machine is to treat it as if it followed the rules and goals of chess. One assumes both that the computer will function as designed and that it will "choose" the optimal move.
This attribution of rationality to the system is the cornerstone of the intentional stance. One predicts behavior in such systems by assuming them to possess certain information and to be directed by certain goals. This ascription of beliefs and desires to machines appears to suggest that machines are capable of "thought."
The aspect of Dennett's argument which I find most appealing is its reluctance to tackle thought on a microscopic scale. As long as he is able to deduce the characteristics of a system from its behavior, he is unlikely to get much criticism from any of us who feel that it is nonsense to suggest that machines are capable of what we, as humans, would call consciousness or thought.
Both The Mind's I and Brainstorms are fascinating books. You should approach them cautiously — they are not light reading. You might decide that the real issue is not whether machines are capable of thought, but just what constitutes thought in the first place.