IBRI Research Report #31 (1986)


BEYOND THE SHADOW OF A DOUBT:
Logical Deduction and the Reasoning Process

David C. Bossard
Lebanon, New Hampshire
 

Copyright © 1986, 2000 by David C. Bossard. All rights reserved.
 

ABSTRACT

During the 1960’s some computer scientists thought that it was just a matter of time before scientists understood the reasoning process of the human brain. Today, they are not so sure! This talk reviews some of the results from the field of artificial intelligence, and the reasons why the human reasoning process remains elusive. Some remarks will also be offered concerning the nature of proof as practiced in the Bible and in theology.

EDITOR'S NOTE

Although the author is in agreement with the doctrinal statement of IBRI, it does not follow that all of the viewpoints espoused in this paper represent official positions of IBRI. Since one of the purposes of the IBRI report series is to serve as a preprint forum, it is possible that the author has revised some aspects of this work since it was first written. 

ISBN 0-944788-31-9


PREFACE

In a sense this paper is a personal apologetic, because it is an attempt to describe the philosophic basis for operations research, a field closely allied to artificial intelligence in which I have worked for nearly twenty years. The fundamental material of OR, as it is called, is the “process” or “procedure”, which is formally described toward the end of the paper. In a sense, the whole argument leading up to the definition of process is intended to convince the reader that the concept is inevitable, which of course I believe it is.

A number of assertions are made, supported by recent published research, about how the mind performs its task of rational analysis. Some remarks are also given about the present state of science and where it is heading. All of this is done in a prayerful attempt to explain to the Christian community some important aspects of the scientific world, and hopefully, to impart some understanding and perspective to the events that are rapidly overtaking us, especially in the areas of artificial intelligence.

Regretfully we do not have time to go into the nature of proof as practiced in the Bible, other than to make some inferential remarks. But if you accept the insights which are offered into the mind’s reasoning process, then you can go on, with some effort, into a full development of that theme. Similarly, we briefly and quite inadequately mention the Biblical concept of Wisdom. Either of these subjects is worthy of a full discussion in itself. It seems that work in one area simply leads on to more and more areas of needed work!

I am continually reassured by these investigations of the absolute reasonableness and necessity of the Christian Faith, and that it will stand any honest inquiry by the scientific mind. I pray that Christians will always resist pressures to retreat in the face of attacks by the secular world, and will press for clarity of thought and thorough understanding of God’s Creation, in the firm conviction that this will reveal handiwork worthy of Our Lord’s Glory.

David C. Bossard

THE ARGUMENT

Beyond the Shadow of a Doubt:
Logical Deduction and the Reasoning Process

“The man will always have the crowd with him who is as sure of himself as he is of the world at large. That is what the crowd likes: it demands categorical statements and not proofs. Proofs disturb and puzzle it. It is simple-minded and only understands simplicity. You must not tell it how or in what way, but simply say yes or no.”
Anatole France, Garden of Epicurus

Modern society harbors two fundamental delusions. The first delusion is that religion is at root an irrational psychological response to the unknown. Don’t misunderstand the tolerance for religion that is generally advocated in Western societies. It is not quite as high-minded as it may seem. It is, for the majority of people, simply a benign way to accommodate an irrational factor of social life. Evangelical Christians reject this notion and affirm that Christianity is a reasonable faith, a religion that will stand up under honest and open-minded rational inquiry. While Christians should welcome religious tolerance, they should at the same time understand its principal motivation.

The second delusion that modern societies harbor is that there is an unbreachable barrier placed between truth and the real world. Pilate’s retort to Jesus, “What is truth?” [Jn. 18:38] is a cynical commentary on the vanity of the search for truth as seen by the educated people of his time. But the message of the Bible is that the quest for truth is a matter with eternal consequences, and that every man’s most important task in life is to know and serve the true God. With this kind of message, how can a Christian then go on to affirm that there is no way for a human to come to the knowledge of truth? Particularly if Christianity is held up to be a reasonable Faith? He cannot. Even arguing for the work of the Holy Spirit in revelation and in drawing men to salvation does not remove the need to affirm that truth is accessible to the reasoning process, and in fact Scripture constantly appeals to reason to demonstrate the truth of what it claims, and argues for the importance of study by each believer.

It is this search for truth that concerns us here. How do we come to know something in our daily lives? What is the activity of the mind that is called the reasoning process? What about that barrier between truth and the real world? Why is it thought to exist, and how can we as Christians answer the accusation of being ignorant if we believe, as we do, that truth is attainable?

The facts are these. The matter of the mind and the stuff the mind works with and the way the mind does its tasks are quite different from the matter and stuff of wisdom as it is viewed by the modern intellectual world, and the way wisdom goes about its business. Why does this difference exist? Is it a virtue or a fault? These are the questions we will address.

We will begin with the mind’s side of the issue. Above all the mind works: it manages quite nicely to handle the reasoning tasks it faces in daily life. This is a virtue that should not be ignored. How does the mind work? How does it come to know facts “beyond the shadow of a doubt”? Perhaps if we can find some answers about this, then they may provide some insight into the more general question of truth.

My particular interest is to approach these questions from the viewpoint of recent research in artificial intelligence (AI). What I hope to do is help you to understand some implications of AI research, and how we as Christians should relate to it.

Since AI by its very name is associated with intelligence, that is, with the way the human mind works, it seems useful to start with some remarks about the reasoning process. Since the whole range of academic pursuit has to do with the reasoning process, we then go on to some remarks about the current state of scholarly research in its treatment of the reasoning process, and where I believe things are heading.

I intend to argue that the current research in AI, and in particular the developments in computer technology that support AI, come at a critical time in history, and that Christians should welcome it with an intelligent understanding of why it is needed, and how it relates to their understanding of the Christian faith.
 

EXAMPLES OF THE MIND AT WORK

Good science begins with common sense, so we will begin with a few thought experiments that will help focus on how the mind works. The experiments have to do with three common activities that the mind engages in every day: first, visual recall, that is recollection of a visual scene after a lapse of time; second, visual recognition, that is, visual identification of familiar objects; and third, speech. Each experiment illustrates some aspect of how the mind works.

First Experiment: Visual Recall

You engage in visual recall every time you try to remember where you put something. In visual recall you reel back the videotape of what you did earlier today, or perhaps yesterday or last year.

The intriguing question is, how does the mind do it? Actually, there are two types of visual recall. One type, expected recall, occurs when the need is anticipated at the time of the event -- for example, locating your car in the parking lot of a large shopping mall. The other type, unexpected recall, occurs when no advance notice is given. Let us focus on unexpected recall.

Suppose I ask some questions about a recent evening trip to the mall. You may prepare your mind for the inquiry by conjuring up some kind of mental image of the parking area, but as yet you have no idea what I am going to ask. In fact, I could ask you a wide range of questions ranging from general to specific, and if you are a typical person, you will be able to answer the questions adequately without much difficulty. How clearly marked was the way into the lot? Were the lights on? How far could you see? Could you see to the other end of the lot? Where did you park? About how many other cars were there? Did you park next to another car? What kind of car was it? Were all of the buildings fully lit as you came in? Who was the first person you met? Describe the circumstances of that meeting.

How does your mind perform this recall? The computer novice would probably assume that somewhere in your brain is a detailed “bitmap” of every visual image you picked up on the way into this meeting, and that you consult this picture to perform the recall. This is nonsense. It cannot be true because the required communications load on the neural network, the physical amount of storage required, and the time to perform memory recall would all be too great. You may be impressed by the number of cells that make up the brain (it’s on the order of a trillion), but no amount would be enough for indiscriminate literal storage of visual images.

Let’s pause for a moment to think about the communication problem -- transferring information from the eye to the brain or wherever the data is stored for future recall. How does the mind communicate visual information over the neural network? In particular, how much information, in terms of individual “bits” of data, is actually communicated? Recently I did some work for the Johns Hopkins University in which the issue was to estimate the communications capacity needed to conduct large area surveillance by satellite or other means. The answer, for your information, is on the order of millions of bits per second. Communication of a single black and white image to a reasonable level of detail requires something on the order of 10,000 individual pieces of information, and this assumes some type of pre-processing to remove uninteresting data (such as blank areas).

With this kind of background information, I was very interested to see what the corresponding communication rates are for human visual perception. The maximum data rate over the neural system appears to be something on the order of 1000 bits per second. From what I can gather this is an aggregate figure. The optic nerve has on the order of a million individual fibers, but a fiber has a maximum data rate on the order of 25 bits per second, which is a number based on physical and chemical limitations of the cells. Brainwaves, which I assume relate somehow to individual cell reactions, are measured in electroencephalograms at frequencies up to 50 Hz. From communications theory this would tend to confirm the data rate of 25 bits per second.

Clearly, these data rates would not support much processing of visual images. Evidently the mind engages in a lot of economizing to reduce the data rate by intelligent processing.
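To make the mismatch concrete, here is a rough back-of-envelope sketch using the figures quoted above (roughly 10,000 pieces of information per black-and-white image, an aggregate neural rate of about 1,000 bits per second). The frame rate and waking-hours values are assumptions added purely for illustration, not data from the paper.

```python
# A rough order-of-magnitude check, using the figures quoted in the text.
# The frame rate and hours-awake values are assumptions for illustration only.

BITS_PER_IMAGE = 10_000      # one black-and-white image, after pre-processing
NEURAL_RATE_BPS = 1_000      # assumed aggregate data rate of the neural system

seconds_per_image = BITS_PER_IMAGE / NEURAL_RATE_BPS
print(f"Shipping one literal image at {NEURAL_RATE_BPS} bits/s takes "
      f"about {seconds_per_image:.0f} seconds")

# Literal storage of a waking day's "videotape" at a modest assumed frame rate:
FRAMES_PER_SECOND = 10
HOURS_AWAKE = 16
raw_bits_per_day = BITS_PER_IMAGE * FRAMES_PER_SECOND * HOURS_AWAKE * 3600
channel_bits_per_day = NEURAL_RATE_BPS * HOURS_AWAKE * 3600
print(f"Raw footage per day:      {raw_bits_per_day:,} bits")
print(f"Channel capacity per day: {channel_bits_per_day:,} bits")
```

At these assumed rates the raw footage exceeds the available channel by roughly a factor of one hundred, which is the gap that intelligent economizing has to close.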

Let me tell you what your mind did when you parked your car at the mall. Be assured that you did not record a mental snapshot of the parking lot. What you did when you parked your car was mentally note a few general facts, particularly differences between your immediate impressions and past experience with similar situations. Depending on your particular interests each of you may have recorded different data. Perhaps one of you is a car buff; you may note some details about the other cars in the area. If you are so inclined, you may note the clothes worn by the first person you met.

A visual impression is about as opposite to a snapshot as you can get. It is like the page of a book loaded with marginal notes. For one thing, you know generally where you are and what you are doing. This provides a powerful context to eliminate all kinds of possible scenes (No, that large shadowy object next to your car is not a moon crater!). You have all experienced brief moments of disorientation when you momentarily lost track of where you are. These moments and your confused reaction to them, only emphasize the critical role of orientation -- grasping where you are -- in visual perception.

When you are asked questions in unexpected recall, you build up images from this limited amount of factual data, filling in the blank areas based on a general familiarity with the scene. Some workers in artificial intelligence refer to these familiar situations as “frames”. The difference between expected and unexpected recall is in the amount and types of actual data that you store away.

This method of handling visual recall is remarkably efficient in that it avoids what would otherwise be a prohibitive amount of unneeded data storage of visual impressions.
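As an illustration only, the following sketch shows how a frame-like scheme could answer recall questions from a handful of noted exceptions backed by a stock of defaults. The slot names and values are invented for this example and are not drawn from any particular AI system.

```python
# A minimal sketch of frame-based recall, in the spirit of the "frames"
# mentioned above. All slot names and values here are hypothetical.

PARKING_LOT_FRAME = {            # defaults: general familiarity with parking lots
    "surface": "asphalt",
    "lighting": "overhead lamps on",
    "neighboring_cars": "a few ordinary sedans",
    "buildings": "lit storefronts at the far end",
}

# What the mind actually stores: a handful of noted differences from the norm.
noted_exceptions = {
    "lighting": "one lamp out near the entrance",
    "neighboring_cars": "a red pickup parked on the left",
}

def recall(slot):
    """Answer a recall question from the exceptions first, the defaults second."""
    return noted_exceptions.get(slot, PARKING_LOT_FRAME[slot])

print(recall("lighting"))    # recalled from a noted exception
print(recall("surface"))     # reconstructed from the default frame
```

Notice that the second answer is a reconstruction from the default, not a stored memory.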

If you agree that this is roughly how you perform unexpected visual recall, then you can see a number of things come together.

For one thing, a whole area of interesting research is opened up: what are the familiar scenes that the mind uses to reconstruct images, and how are these stored? Hint: nowhere in your head do you have a bitmap of a Volkswagen Beetle. My guess, which I believe is consistent with some views of cognitive psychology, is that a familiar image is built up of layer upon layer of other familiar images.

For another thing, you can see how it is possible to have faulty recall without realizing it. If you are unobservant, then in the reconstruction you may see things in the scene that aren’t in fact there.

A camera that just records a picture is just a camera. A camera that records a summary of the picture is a smart camera. The science of artificial intelligence as applied to image processing has to do with building smart cameras. The prototype for this is the way the mind does it.

A naive person may be fooled into thinking that a very dumb camera is in fact smart: the easiest thing to do is record all the data, and if that is done, then amazing detail in recall is possible -- perhaps far exceeding what the mind can do. After all, if the entire picture is recorded, then you can not only count the cars, but count the bricks in the wall that you pass on your way in. A person could be dazzled by the detail. But that is not the essence of intelligence.

To appreciate true intelligence, recall that the mind is recording the visual impression with the view to economy of effort, and that is needed in order to avoid being overwhelmed by the sheer amount of irrelevant or redundant facts. The correct viewpoint is not recall from a single particular photograph but from a lifetime’s worth of images, continuously received and recorded for rapid recall, frequently with no particular notion of how or whether the data will be used.

The amazing stories of artificial intelligence almost always deal with very restricted particular tasks, not the general tasks that the mind does with unequaled ability.

Second Experiment: Visual Recognition

Vision is the process of seeing. Vision is not quite so simple as aiming a camera and snapping away. In fact, mere seeing is not of much use unless the mind is able to interpret what is seen. Visual recognition is a fundamental aspect of vision in which what is seen is recognized to be a familiar object.

Visual recognition is such a commonplace activity of the mind that we hardly give it a thought. However, it has puzzled philosophers and scientists through the ages. Plato puzzled over how the mind recognizes that an object is a chair when chairs come in so many different forms. You might think, however, that in this day of high-powered computers, that kind of recognition task would be fairly easy to do. In fact it is quite hard: no computer today can do so simple a thing as pick a cat out of a scene. A. K. Dewdney described a fictitious device, called the perceptron, as follows.
 

“Imagine a black box rather like a camera. At the front is a lens and on one side is a dial with various settings such as ‘Tree,’ ‘House,’ ‘Cat’ and so on. With the dial set to ‘Cat’ we go for a walk and presently encounter a cat sitting on a neighbor’s porch. When the box is aimed at the cat, a red light goes on. When the box is aimed at anything else, the light remains dark.

“Inside the box is a digital retina sending impulses to a two-layer logical network: an instance of the device called a perceptron. At one time it was hoped that perceptrons would ultimately be capable of real-world recognition tasks like the one described in the fantasy above. But something went wrong.” 

A. K. Dewdney1

What went wrong is that human vision is a very complex operation. It is not merely a matter of receiving and processing light impulses as a camera would do. An intricate and largely un-understood kind of intelligent processing goes on simultaneously with the vision.

The fictitious “perceptron” mentioned by Dewdney represents this unknown processor. To see how such a perceptron would work, pick some common visual object, for example the cat. Try to find out how your mind recognizes the object when you see it. Consider how that recognition works when various things intrude to obscure or confuse the image: it is partly hidden in grass, or behind a tree; it is nightfall and the light is dim; rain or fog obscures the view. How is two-dimensional recognition from a photograph or motion picture different from three-dimensional recognition? Does movement affect the identification? Also ask how your mind can sometimes be fooled by camouflage or other means. Sometimes it is the failures that give the best indications of how the mind works.
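For readers who want something concrete, here is a minimal sketch of the kind of decision unit a perceptron embodies: a weighted vote over feature detectors followed by a threshold. The feature names, weights, and threshold are invented for illustration; actual perceptron research used weights learned over a digital retina, but the basic character of the device is the same.

```python
# A minimal sketch of a perceptron-style decision unit: a weighted vote over
# feature detectors, followed by a threshold. Features and weights are invented.

def perceptron(features, weights, threshold):
    """Return True ("light the red lamp") if the weighted sum clears the threshold."""
    total = sum(weights[name] * value for name, value in features.items())
    return total >= threshold

weights = {"pointed_ears": 1.0, "whiskers": 1.0, "tail": 0.5, "fur_texture": 0.8}

plain_view = {"pointed_ears": 1, "whiskers": 1, "tail": 1, "fur_texture": 1}  # cat on the porch
obscured   = {"pointed_ears": 0, "whiskers": 0, "tail": 1, "fur_texture": 1}  # half hidden at dusk

print(perceptron(plain_view, weights, threshold=2.5))  # True
print(perceptron(obscured, weights, threshold=2.5))    # False
```

Because such a device can only add up whatever its fixed detectors report, it has nothing corresponding to context, expectation, or reconstruction to bring to an obscured or oddly lit scene -- one way of restating the point that vision involves far more than processing light impulses.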

Not only is current computer technology unable to make a perceptron, even tasks that seem much simpler are still beyond reach -- for example recognition of a printed letter, such as “A” in all of the various forms it can take. Doug Hofstadter summarizes the problem of vision as follows:
 

“Each letter of the alphabet comes in literally thousands of different ‘official’ versions (typefaces), not to mention millions, billions, trillions of ‘unofficial’ versions (those handwritten ones that you and I and everyone else produce all the time). There thus arises the obvious question: ‘How are all the a’s like each other?’

“The problem of intelligence, as I see it, is to understand the fluid nature of mental categories, to understand the invariant cores of percepts such as your mother’s face, to understand the strangely flexible yet strong boundaries of concepts such as “chair” or the letter “a” … The central problem of (artificial intelligence) is the question: What is the letter ‘a’ and ‘i’? ...By making these claims, I am suggesting that, for any program to handle letterforms with the flexibility that human beings do, it would have to possess full-scale general intelligence.”

Douglas Hofstadter2

Clearly, although we do not know the details, the mind performs vision by combining a few pieces of visual data with a lot of processing. As the eye looks, the mind is active, extracting, evaluating, and comparing the visual impressions with familiar objects that it has in its memory. This activity takes place almost instantaneously, so we know that it does not require a very large amount of recall or computation.

Third Experiment: Speech and Communication

Speech, which is the way humans convey information to one another, is perhaps the most profound activity of the human mind. The fact that we express ourselves in discrete words, which you can even point to and count if you like, has deluded many people into thinking that there is something essentially simple about speech and related activities. In fact one of the earliest uses suggested for a computer was automatic translation from one language to another. A recent assessment of that effort states:
 

“[In 1947, some prominent scientists] confidently expected that mechanical translation between natural languages would be among the early important accomplishments of nonnumeric computation. . . . The first approaches to mechanical translation were based on the assumption that word-for-word substitutions from a computer-accessible dictionary would result in useable translations. It was soon apparent that words had so many different meanings, depending on their uses in various contexts, that purely lexical translation could not provide a satisfactory result. Emphasis shifted to study of the syntactic structure … Some improvements of translation resulted, but the vast scope of the problem of understanding the content of natural language texts became increasingly apparent. … By [1960] several of the early workers had become thoroughly disillusioned … By 1970 only a trickle of funds was devoted to mechanical translation.” 
Robert F. Simmons3

Seldom does the meaning of a sentence reside in the words or even the phrases and clauses. In fact our speech abounds in technical inaccuracies and yet we are able to communicate quite well. If you really want to annoy your friends, or appear to be mentally deficient, then insist on literal accuracy in speech. Imagine building a machine that was only 60% accurate and claiming that as a virtue!
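As a toy illustration of the word-for-word substitution described in the quotation above, consider the following sketch. The miniature English-to-Spanish dictionary is hypothetical and deliberately crude.

```python
# A toy word-for-word "translator" of the kind the early projects attempted.
# The mini-dictionary is hypothetical; each word gets exactly one rendering.

DICTIONARY = {
    "the": "el", "spirit": "espíritu", "is": "es", "willing": "dispuesto",
    "but": "pero", "flesh": "carne", "weak": "débil",
}

def translate(sentence):
    """Substitute each word independently; unknown words pass through unchanged."""
    return " ".join(DICTIONARY.get(word, word) for word in sentence.lower().split())

print(translate("The spirit is willing but the flesh is weak"))
```

Every word receives a rendering of some kind, but the program has no idea which sense of “spirit” is meant, no notion of gender agreement or word order, and no grasp of the sentence as a whole -- precisely the wall that the early translation projects ran into.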

Associated with the question of speech is the issue of how the mind communicates with itself. As a hint of the complexity of the problem, you may want to conduct experiments with your own thought processes. Consider how you think through things -- do you do it with the medium of language or in other non-verbal ways? This specific question has been asked a number of times, with prominent scientists coming down on both sides of the issue. One such scientist states:
 

“I insist that words are totally absent from my mind when I really think . . . after reading or hearing a question, every word disappears at the very moment I am beginning to think it over; words do not reappear in my consciousness before I have accomplished or given up the research . . . .“
Jacques Hadamard4

My personal opinion, based on informal queries of friends, is that there apparently are differences among people, not necessarily correlated with intelligence: some think primarily in non-verbal ways, others in verbal ways. For our present purposes, this observation tends to support the thought that the information conveyed between individuals by the medium of language may involve an exceedingly complex interplay between words and concepts. Words are simply not used by the mind as separate entities: even phrases and clauses are not the basic blocks. Certainly syntax is not the final residence of meaning.

Almost every sentence that you speak paints a picture, hints at a concept, recalls some similitude. These are the real information conveyers, and until the understanding of language has extended to this dimension, it will not be possible to provide any artificial object with effective communication skills.

Summary of Experiments

To summarize these thought experiments, you can see that the potential, perhaps even the expectation, is present for the mind to be overwhelmed by the data assimilation tasks it faces every day. Certainly without the working example of the mind to prove that it is possible, we would surely conclude that it is not. A cyborg from some alien planet, presented with the above data, would laugh at the fictitious yarn being told: the mind cannot exist, because what it does is impossible to do.

However, since the human mind does exist, and does in fact cope very well with its environment, it should be possible to find some rational explanation of how it works. The search into this explanation is the basis and motivation for research in AI.

The trick apparently is in the data processing. The algorithms, that is, the logical steps and procedures that perform this processing, are the keys to the mind’s ability to cope. For the most part these algorithms are unknown to us today. It may be that the work needed to obtain a basic understanding will occupy mankind for centuries to come. For practical purposes, the state of progress to date in understanding how the mind works still leaves us overwhelmed by the data, which, viewed from the mind’s perspective, is a very primitive state of progress.

The curious thing is that the discussion we have had thus far is not significantly different from the type of discussions you would find in Plato or Aristotle, who lived well over two thousand years ago. The main difference between those scholars and today’s counterparts is that today we have learned to limit our discussions to things that we know how to handle: relative to a science textbook written today, Aristotle’s Physics would seem to range far off the subject and engage in all sorts of irrelevant speculation. This is simply because Aristotle had not limited his view to the narrow range of inquiry characteristic of modern science.

In the process of this constriction, scientists have sometimes come by the gratuitous opinion that they understand how the world works. When we look at early efforts at research in AI, as well as other areas of science, we see this self-confident assertion at play. However, the facts indicate otherwise, and as you really consider the experiments mentioned here, you come to the realization that we aren’t as far progressed beyond Aristotle as we may have thought.

Modern scholars have made marvelous strides in the handling of objective data, but this is like saying God did a marvelous job in building an eye. An eye is nothing without sight, and sight is useless without perception. The modern world has done almost nothing since the time of Aristotle in learning how to assimilate and process that data which it has so finely gathered. It is for this reason that I will assert below that we are about to enter a new scientific renaissance, but this time one that places its main emphasis on processing data, rather than on gathering it!

At the start of this paper is a quote from Anatole France. The quotation refers to people who basically do not want to think. I would like to consider a new twist. When you face a situation that overwhelms your senses and your ability to think straight -- action is needed and there is no time for a careful proof -- then the only apparent way to survive may be to hold onto categorical statements rather than seek the truth. I think in a number of ways, this modern age is in that situation. The explosion of information and the proliferation of facts and experts has made sensible management of the facts nearly impossible.

There is a kind of saturation threshold in the assimilation of science. Free societies exist because of the free exchange of ideas. But a problem arises in distinguishing good from bad, particularly in the world of narrow expertise. Anatole France’s observation may be true of us even if we don’t wish it to be.

I recall back in the 1960’s men such as John Kemeny of Dartmouth talking about the coming information explosion. He noted the explosive growth of literature, professional journals and the like, and the predictable inability of libraries to cope with the massive flow. No longer can any person be an educated generalist; only increasingly narrow specialists survive. The danger in this trend is the cacophony of apparently conflicting information and the inability to evaluate it properly, which leads to misapplication and the use of irrational jumps of logic. You can see examples of this almost daily in the periodicals and literature -- nuclear winter, the Strategic Defense Initiative, preserving the snail darter -- problems of alleged global proportions but increasingly specialized and of narrow focus, beyond the ability of the average man to penetrate. This situation can lead to demagoguery, latching onto a person or idea or ideology for basically irrational reasons, as a substitute for rational thought. I believe that we see evidence of this all around us today.

The resolution to this information glut, if there is one, is the development of means for the educated person to evaluate and process the data, specifically of highly advanced tools of artificial intelligence that can do for the educated person what the mind does with its own sensor data -- process it to reduce the overwhelming load. This is the main need of the future for progress in AI, why it must succeed, and why the Christian should welcome its coming.

My purpose in giving you these thought experiments about the mind’s reasoning processes is not to conclude with the assertion that scientists working in artificial intelligence are working way over their heads, or even that no one will ever duplicate intelligence in a man-made device (although that may well be true). Actually I have a much more important agenda, namely to note the relative infancy of current scientific research as compared with the most common activities of the mind, and to show you something of the directions that science must go, what kinds of problems it must handle, if it is to mature in these areas.
 

THE HUMAN MIND AS A PROTOTYPE FOR LEARNING

The root reason for our interest in the human mind is that the way the human mind goes about its business forms a prototype, a working model, of the reasoning process, and therefore is a good thing to study in setting down the standards and methods of formal learning.

This is not to say that the mind is an infallible guide, because we all know only too well that the mind can be very fickle at times. The scholar must be careful in his use of the prototype. On the other hand, the mind has one strong point to commend its ways: it works, and, for the most part, works very well. And it works well in the face of apparently overwhelming and discouraging obstacles.

We will now turn to the question of how scholars have viewed this prototype. This will help us understand why the “barrier” between truth and the real world is assumed to exist.
 

WISDOM AND THE ADEQUACY OF SCHOLARLY ARGUMENT

As scholars develop their methods of argument, they start with a notion of how the mind works and then refine it to get at the essential features of proper thinking, removing the dross and imperfections that they find in the process. What they view as proper methodology after this refining process, the correct way to think, determines how far they can go in the pursuit of wisdom, and how relevant their findings will be to humanity.

Let’s start at the top: What is wisdom, and what is worthy of scholarly investigation? Here is an interesting point: the Biblical view of wisdom is a fairly broad one which includes all aspects of human activity. This view contrasts sharply with the usual view of modern Western society, which derives from the more restrictive Classical Greek concept. In the article on the Hebrew word for wisdom (hokma), the Theological Wordbook of the Old Testament states:
 

“The ethical dynamic of Greek philosophy lay in the intellect ... The Hebrew wisdom ... covers the whole gamut of human experience.”

If you read the description of the craftsmen builders who constructed the wilderness tabernacle you may be puzzled by the King James statement that they were “filled with wisdom” [Exodus 36:1]. Our contemporary notion of wisdom tends to exclude craftsmanship and technological skills, and this is reflected in the New International Version’s rendering of the Hebrew as “skill and ability.” I suppose it isn’t the job of the NIV to change our culture’s use of a word, but if you insist on using your more limited culturally-induced definition of the word ‘wisdom’ everywhere that it appears in Scripture, you will miss much of the point of the Biblical Wisdom literature.

In our modern intellectual world, some of the shibboleths and limitations of its Greek ancestry are still present. We sometimes poke fun at the ancient Greek’s refusal to dirty his hands with practical investigations, but we still retain his concept of what is “real” wisdom as opposed to what is merely “skill and ability.”

Biblical authors would have thought it strange to talk of wisdom that is essentially amoral -- but mathematical reasoning, which is the purest form of logical analysis in the Greek view, is just that: the most wicked person in the world could be a great mathematician. One of the embarrassments of the mathematical world is the fact that some prominent mathematicians and other scientists of the 1930’s became ardent Nazis during World War II.

Greek philosophers on the other hand, would think it strange to put wisdom to the test of practicality. But the personified wisdom of Proverbs 8 is the craftsman at the side of God during the creation of the world. And the glory of the creation is that it is a supremely workable world, one in which all things fit together in a sustaining system: the clouds are set above, the sea has boundaries, the foundations of the earth are marked out. The same theme of harmony and workability is seen in the record of the Lord’s response to Job and his companions beginning in Job chapter 38. All of God’s creation fits together in a harmonious way. This concept of workability, that is utility, is not a strong motivation for Greek philosophy.

Above all, the mind works; in a fundamental way, scholarly methods do not. As the earlier illustration of the camera demonstrates, the scholarly world has been deluded into thinking that excellence in a narrow task is equatable with all of wisdom. In fact, it is overall workability, the ability to support a workable intelligent system in the face of all that is needed to sustain intelligent life, that is the true essence of wisdom.

On the whole, I feel that the scholars have not served mankind very well in adopting excessively restrictive notions in the matter of wisdom. Particularly over the past few decades, the inadequacy of these notions has resulted in all kinds of strange warps and bends in academic circles, even leading to the introduction and tolerance of mysticism and oriental philosophy in the formerly hallowed halls of pure research. In fact, I strongly believe that one of the greatest dangers of the modern age is a general flight from rationality. This may sound incongruous in the age of science and technology, but it is striking to me to note how many highly educated scientists basically believe that the world they see is inherently self-contradictory. Consider the title of a recent book on modern physics: The Dancing Wu Li Masters, and the theme of Doug Hofstadter’s book Gödel, Escher, Bach, which is that the absurd is normal and inconsistency is a way of life. Woven through Hofstadter’s book is a strong flavor of Eastern mysticism, especially Zen Koans (nonsense sayings).

Some of this irrational bent is the result of having the orderly underpinnings of classical science knocked out from under the modern scientist. Ilya Prigogine, a prominent Belgian physicist, remarks,
 

“The quest of classical science is in itself an illustration of a dichotomy that runs throughout the history of Western thought. Only the immutable world of ideas was traditionally recognized as ‘illuminated by the sun of the intelligible,’ to use Plato’s expression. In the same sense, only eternal laws were seen to express scientific rationality. Temporality was looked down upon as an illusion. This is no longer true today. . . . We find ourselves in a world in which reversibility and determinism apply only to limiting, simple cases, while irreversibility and randomness are the rule.”
Ilya Prigogine5

This flight from rationality is an extremely important issue to face. It results from adopting a notion of demonstrable truth that is so narrow that it doesn’t include the most essential activities of life, and therefore is essentially unworkable. Many thinking people have the view that it is impossible to decide what is truth, which is a natural consequence of an overly restrictive rational methodology.

Do we believe in the reasonableness of our faith? Then we must believe that there is a place for effective demonstration of truth. What form must this demonstration take? What rules must it follow?

My thesis for tonight is that the recent findings of science, particularly in the area of research in artificial intelligence, strongly point to the conclusion that the prevailing concepts of provability and the scholarly method are too restrictive; they are not viable in the sense that they cannot provide a useable framework for understanding, or for surviving in the present age. A symptom of this deficiency is seen in the inability of science to describe how the mind performs its ordinary daily functions.

Some people are content to set down rigid rules of how things should be, and let it go at that, as if the compelling nature of their logic is the only thing that mattered. Thus you have theologians such as Job’s comforters who imagine how God must work, based on some rigid concept of righteous behavior and the meaning of imperfection in the world. You have scientists who laugh at the imperfections of the human eye and say that it must have been made by accident rather than by a Designer. You have logical purists who deny sensory experience as a valid avenue for learning truth. I believe that such people miss an essential point, namely that if a thing is unworkable it is not worth much, even if it is technically excellent. The marvelous thing about the mind’s reasoning process is that it works, and science that doesn’t work will not long continue to attract the mind’s attention.

It is for this reason that I believe that the world is situated today between two major eras of intellectual renaissance and that we are now in a period of intellectual turmoil that represents the transition between these two eras. This transition period may continue for many years. The resolution will include a broadening of the accepted definition of wisdom and therefore of respectable academic pursuit, to coincide more closely to the prototype provided by the mind. The current spilling over into mysticism is an irrational overcompensation for the inadequacies of current accepted academic methodology during this period of turmoil.

To help to understand the current state, we will now look more closely at the origins of the modern academic methodology, and how that methodology has fared over the past 30 years of work in artificial intelligence. Finally, we will conclude with an overall assessment and some suggestions of how the current turmoil will be resolved.
 

ORIGINS OF THE MODERN SCHOLARLY METHODOLOGY

What scholars view as proper methodology determines how far they can go in the pursuit of wisdom. It is interesting to look at the history of philosophy, from ancient times to the present, in terms of this self-limitation. Our particular culture is part of the stream of philosophy that passed through the ancient Greek and Egyptian cultures. We will now briefly pause to summarize this stream.

As far as recorded history reveals, the early Egyptian and Greek philosophers were the first to develop a formal system of deductive logic and then apply the concept to the development of knowledge as a process of logical deduction from first principles. Pythagoras and Euclid developed the procedures and applied them to mathematics and geometry. Socrates and Plato developed the technique of dialog. Somewhat after Plato’s time, Aristotle set down principles of deductive logic which are still followed to this day.

The concept that the logical deductive process has a reality in itself, so to speak, must have involved a tremendous struggle in conception, and been a great breakthrough to the systematic development of knowledge when it was recognized. This concept is the essence of mathematics, which survives to this day as a major intellectual discipline, perhaps the purest form of objective science. In mathematics, if you state the prior assumptions, then any conclusions you claim must follow with no room for subjective opinion. The falsity of a mathematical claim can be proved by giving a single example where it fails to hold -- this is such an iron-clad fact that “proof by contradiction” is an accepted way of mathematical demonstration.
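As a standard textbook illustration of proof by contradiction -- not drawn from the paper itself -- here is the classic argument, known since antiquity, that the square root of two is irrational:

```latex
% Classic proof by contradiction: $\sqrt{2}$ is irrational.
Suppose, for contradiction, that $\sqrt{2} = p/q$ where $p$ and $q$ are
integers with no common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is
even and hence $p$ is even; write $p = 2r$. Then $4r^2 = 2q^2$, so
$q^2 = 2r^2$ and $q$ is even as well. Thus $p$ and $q$ share the factor
$2$, contradicting the assumption that $p/q$ was in lowest terms.
Therefore no such fraction exists, and $\sqrt{2}$ is irrational.
```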

The pivotal question, from our modern point of view, concerns the source of the first principles, and this proved to be the Achilles heel of ancient science. Aristotle stated the difficulty in his treatise Posterior Analytics, “All instruction given or received by way of argument proceeds from pre-existent knowledge.”6 Where did this pre-existent knowledge come from?

Distrust of the accuracy of the senses and of the “reality” of the outside world led to a view that the first principles should not come from empirical observation. Aristotle states: “Our own doctrine is that not all knowledge is demonstrative: on the contrary, knowledge of the immediate premises [i.e. first principles] is independent of demonstration.”7 He views this as a necessary doctrine in order to avoid the problem of circular argument. Thus Aristotle’s Physics, which is a discourse on the nature of mass, motion and similar subjects, contains very little reference to actual experimentation, and, what is most frustrating to modern scientists, shows a preference to speculate about things rather than simply go and find out! For example, he argues that heavier things fall faster than lighter things.8 As an aside, Galileo in his discourses on physics gives a purely logical argument against this notion: if a heavier object falls faster than a lighter object, then bind the two together. The lighter object will retard the heavier, so the combination will fall slower than the heavier object alone; but the combination is even heavier. Contradiction; therefore objects fall at the same rate independent of weight.9

You get a flavor of the profound distrust of empirical observation in the famous “cave” allegory recorded by Plato, in which it is observed that there is no fundamental way to be assured that your senses are conveying true information.

Armed with the conviction that there is no way to attain true knowledge from faulty observation, Socrates developed the position that all knowledge is already inborn in the human, and the process of learning is a matter of drawing out this knowledge from the hidden recesses of the mind: hence the Socratic dialogs.

Before we dismiss these viewpoints as foolishness, it should be noted that the distrust of sense perceptions is well-placed. Until ways were slowly developed to get around this very real difficulty -- by developing the concept of the reproducible experiment, for example -- progress in science was unavoidably hindered. If you like you may view the problem as one of quality control. Sure, people like Archimedes developed techniques for dealing with many technical problems, but it was difficult to integrate all of the seemingly isolated pieces of information into a coherent body of science because of the dependence on imprecise sensory perceptions.

As a matter of interest, the ancient division between “science” and “technology” which distinguishes respectable from lower forms of knowledge still persists today, and is I believe a genuine surviving vestige of Greek Philosophy. As we noted above, this division is not found in Biblical literature.

Further, it should be noted that the question of whether the human is born with innate knowledge is still an open one. The evidence from animal life is that indeed there is such a thing as inborn knowledge -- surely no one teaches a spider how to spin its web! There is firm evidence that an infant is born with many of the mental linkages required for learning already in place. For example, a year ago I read in Science magazine of experiments in which infants could mimic facial expressions only a few hours after birth -- clearly they must have been born with the capability to associate visual images of facial expressions with their own facial expressions. Recent research in language skills suggests that human infants born in different cultures have differing capacities for learning languages, which appear to be correlated to the language requirements of the particular culture. My conjecture is that one major accomplishment of the next few decades will be to identify with considerable accuracy significant categories of human knowledge that are present at birth. Perhaps even the mechanism by which this knowledge is transmitted will be somewhat understood. Certainly our experience with lower life forms such as spiders implies that some knowledge is genetically transmitted: it is only a question of how much, what kinds and how.

The methodology developed by Aristotle and his contemporaries held sway until the beginning of the modern era of science. With Galileo, Copernicus, Newton and their contemporaries came the concept of empirical science. The concept evolved that the scientific method consists in logical deduction based on empirical observations. The problem of erroneous sensory perceptions, which stymied the ancient philosophers, was addressed by the notion of reproducible experiments, and by refining the concept of measurement and the use of precisely crafted measuring devices.

For the most part this concept of knowledge has survived to the present. Science was released from the earlier necessity of divining the precise and logically necessary truth to that of developing models that are adequate to explain the results of reproducible experiments. The precision of logical deduction was carried over from the earlier era, but applied to this enriched base of first principles.

Just as the earlier era discredited the results of faulty sensory perceptions, so the shibboleth in this era has been to disdain anything other than reproducible observations combined with deductive analysis. Measurement and deduction have been the tools of the trade.

Predictably this single-minded focus has occasionally led to some extreme philosophical positions. One such aberration is the reductionist philosophy which argues that complex natural or social phenomena can be explained by reference to a few empirical principles. This viewpoint was popular in the late nineteenth century. The Communist dialectic, which dates to this period, has this fundamental flaw in that it attributes the most astoundingly complex consequences of social activity to a few driving principles. The proper commentary on this mindset is given by Ilya Prigogine, continuing the quote given above:
 

“What are the assumptions of classical science from which we believe science has freed itself today? Generally those centering around the basic conviction that at some level the world is simple and is governed by time-reversible fundamental laws.”10

Another extreme view, which argues against any hint of a priori knowledge, asserts that the very process of logical and mathematical deduction has no a priori basis in itself, but is only a social agreement among scientists as to what “feels good.” Fortunately in most Western societies these extreme positions are not taken seriously.

It should also be remarked that the insistence on reproducible observations, which is rightfully the foundation for modern science, is taken by some to mean that any non-reproducible observation has no validity, perhaps isn’t even “real” in some sense. This type of argument is used by many to justify disregard of miraculous events and instances of God’s intervention in history, even in some cases to deny that they occur because they are not reproducible.
 

EXPERIMENTS IN ARTIFICIAL INTELLIGENCE

The recent interest in artificial intelligence is a natural step in the search for an adequate grasp of the human reasoning process. Until the development of computers, the tools were simply not available to conduct the necessary investigations. These investigations were begun during a time when the classical scientific methodology was accepted almost without question. Therefore the work in AI provides a textbook case of how that methodology has stood up.

We are now some thirty years into the computer age, and one of the interesting facts of this period is the vigor with which inadequate views of the reasoning process have been smashed by the evidence that has evolved in these investigations.

We will mention here only a small amount of the work that has been done: earlier, mention was made of abortive efforts in visual recognition, speech synthesis and machine translation.

Turing’s Experiment

One of the first people to explore the potential of computer programming was Alan M. Turing, a British scientist heavily involved in code-breaking during World War II. In 1950 he wrote a paper entitled “Computing Machinery and Intelligence” in which he predicted that computers were capable of imitating human intelligence perfectly and that indeed they would do so by the year 2000.11 He proposed an experiment in which a person would communicate over a teletype with another person or computer, the object being for the first person to decide whether a human was on the other end of the line. The human would engage in a conversation designed to ask things that presumably only another human could answer properly: the computer on the other end -- if indeed it is a computer -- has the job of responding in a human-like way, so as to fool the person. The irony in this experiment as it works out, is that the computer has to be careful not to do things a human cannot do, such as do calculations too quickly or answer complex logical questions without any errors.

You may think it a little strange that Turing suggested such an indirect definition of human intelligence. I believe that his definition is consistent with the general approach of a school of psychology which holds that the only way to know about intelligence is to make objective observations of the effects or evidence of intelligence. By this view, if you have something that behaves like a human intelligence in all measurable aspects, then for practical purposes, you have reproduced human intelligence. Turing of course left to the imagination what kinds of questions might be posed in his experiment, or how the human might infer the nature of his talkative companion.

The implication that Turing drew from this thought experiment is that if it is not possible to tell the difference between the reasoning process of a computer and a human, then there is no essential difference.

The Turing experiment has been tried many times and in a sense has formed a focal point for a number of diverse investigations. The experiments have uniformly met with failure; in a sense it is just too easy for a human to detect that his conversant is a computer. Most scientists are extremely doubtful that his experiment can be carried off successfully, and, even if it could be, whether such a success would really say anything meaningful about human intelligence.

Learning Machines

A hidden assumption in the Turing experiment is that it is possible to start with an ignorant computer and teach it all the tricks of the human intelligence, so that it can ultimately pretend to be human. Is it possible to build up human intelligence from a “blank slate,” learning all that is needed by trial and error, building a base of knowledge by observing “what works and what don’t”? A current textbook on artificial intelligence has this to say about the subject.
 

“One idea that has fascinated the Western mind is that there is a general-purpose learning mechanism that accounts for almost all of the state of an adult human being. According to this idea, people are born knowing very little, and absorb almost everything by way of this general learner…. This idea is still powerfully attractive. It underlies much of behavioristic psychology, AI students often rediscover it, and propose to dispense with the study of reasoning and problem solving, and instead build a baby and let it just learn these things.

“We believe this idea is dead, killed off by research in AI (and linguistics, and other branches of ‘cognitive science’). What this research has revealed is that for an organism to learn anything, it must already know a lot. Learning begins with organized knowledge, which grows and becomes better organized. Without strong clues to what is to be learned, nothing will get learned.”12
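The point of the quotation can be seen in a small sketch: a learner that starts with nothing but raw examples cannot generalize at all, while one equipped with even a crude built-in assumption about what matters can. The data and the built-in bias below are invented purely for illustration.

```python
# A small illustration of the claim above: without prior organized knowledge,
# examples alone do not yield generalization. Data and bias are invented.

examples = [((2, 0), "safe"), ((3, 1), "safe"), ((8, 1), "risky"), ((9, 0), "risky")]

# Learner A: a pure memorizer, with no prior notions at all.
memory = dict(examples)
print(memory.get((5, 0), "no idea"))      # unseen case -> "no idea"

# Learner B: born "knowing" that only the first feature matters and that the
# two classes are separated by a threshold on it.
safe_max = max(x for (x, _), label in examples if label == "safe")
risky_min = min(x for (x, _), label in examples if label == "risky")
threshold = (safe_max + risky_min) / 2

def classify(case):
    return "safe" if case[0] < threshold else "risky"

print(classify((5, 0)))                   # generalizes: "safe"
```

The second learner succeeds not because it saw more data but because it began with organized expectations about what the data could mean.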
 

Speech Synthesis and Automatic Translators

A second hidden assumption in the Turing test is that a computer can be taught to carry on a natural language dialog. Earlier we remarked on the failures in this area after substantial early interest in the 1950’s and 1960’s. One survey summarized the current state of machine translation as follows:
 

“Perhaps the most influential critic of this work was a linguist, Bar-Hillel, who in 1964 published a critique that pointed out that there was no way to do word sense disambiguation without a deep understanding of what a sentence meant.”13

Knowledge Based Systems

Some authors, noting the general failure of these experiments in artificial intelligence have come to the conclusion that the missing ingredient is an adequate knowledge data base. One author states:
 

“For about 30 years a small community of investigators has been trying, with varying degrees of success, to program computers to be intelligent problem solvers. By the mid-1970’s, after two decades of humblingly slow progress, workers in the new field of artificial intelligence had come to a fundamental conclusion about intelligent behavior in general: it requires a tremendous amount of knowledge, which people often take for granted but which must be spoon-fed to a computer.”14

This observation has led some to advocate the development of huge “knowledge-based systems”. I have ambivalent feelings about this. Some people seem to have the opinion that all we need is computers with vast memory banks of stored knowledge. These people seem to me to fall into the same trap as the behaviorists, except I suppose they would have to agree with Socrates that much of the data base of facts resides in our minds at birth.

While granting the need for a large data base, my personal view is that a more important missing element is an understanding of the mental processes that the mind uses to economize on the amount of factual data required.

Summary

These areas of research are fairly representative of current work in artificial intelligence. What we see is strong evidence that there are fundamental missing components in the current scientific perception, because attempts to duplicate the mind’s activities have not enjoyed much success. One author summarized the current situation as follows.
 

“At present artificial intelligence programs fall in these two groups: those that perform one clearly defined task at human or near human standards, and those that perform more general tasks at drastically subhuman standards.”15

ASSESSMENT

We have now laid the necessary groundwork to address the purpose of this paper. We have considered some thought experiments on the way the mind works in everyday tasks. We have looked at how scholars view the pursuit of truth and their concept of logical proof. We have seen how this view has fared in the efforts to explain the human reasoning process in the AI investigations of the past 30 years, and we noted the sense of urgency to find some rational means to address the information glut that is present today, which has led to some irrational responses even within the ranks of scientists themselves. Having reviewed where things are, let us ask, where are things going, and what should the Christian response be to events as they unfold?

The importance of our finding a response is emphasized by the same author just quoted, who stated it in the following way:
 

“Many ages in the past have shown great promise while facing great difficulties, yet our age is perhaps unique in that its problems and its promise come from the same source, from the extraordinary achievements of science and technology. Other factors have remained constant. Men and women are no more greedy, violent, compassionate, wise or foolish than before, but they find themselves in command of a technology that greatly enhances their capacity to express these all-too-human qualities. Technology enables them to reshape nature to conform to their needs, as far as possible to make nature over in their image.

“But it is a flawed image. Mankind is indeed both good and evil, and it is expected that human technology will sometimes harm nature rather than improve it. Until recently, however, our technical skills were so feeble in comparison with the natural forces of climate and chemistry that we could not seriously affect our environment for good or ill, except over millennia. High technology promises to give us a new power over nature, both our own nature and that of our planet, so that the very future of civilization now depends upon our intelligent use of high technology.”16

The main thrust of what has been said thus far is this. If we use the mind’s reasoning process as the prototype for academic methodology, then it must be concluded that the methodology at this time is not adequate to describe all of the things that the mind does. It is not a full bag of tricks. Essential pieces are missing, because the current methodology cannot explain how the mind works, and because it is overwhelmed by the information processing requirements that are needed to sustain a workable system.

The recent history of attempts to understand and duplicate the activities of the mind simply underscores this point. When scientists have ventured to reproduce the mind’s activities in language, vision and the other areas touched on earlier, brave claims and anticipation have generally yielded to disappointment and retrenchment.

During the discussion of recent experiments we gave a few hints of where things appear to be going, and it is these areas that I would like to develop now.

In the comments on the history of learning, a remark was made that you may have overlooked in the rush of things, which I believe provides a key to the resolution. It was mentioned that, although the concept of logical deduction from first principles is still an important element in present day science, science has been released from the earlier necessity of divining the precise truth to that of developing models that are adequate to explain the results of reproducible experiments. Modern scientists, unlike the earlier philosophers, do not seek perfect a prioris and absolutely correct deduction. An author of a recent textbook on artificial intelligence states:
 

“At one time, philosophers thought that the reason behind science’s unique history of progress is its ability to prove (or confirm) its theories. As such, science could be viewed as accumulating knowledge, where each block in the structure was secure because it was proven true.

“While this model has a certain plausible ring (what are experiments for, after all?), it is hard for us in the twentieth century to take it seriously. The two great accomplishments of twentieth century physics, quantum mechanics and relativity, together overturned one of the great intellectual structures of all times, classical physics. If science proves its theories, then there were a lot of very sloppy physicists who somehow made a mistake when they thought that they had proven classical physics.

“The best articulated response to this dilemma was put forth by Karl Popper, who suggested that the purpose of experiments was not to prove theories, but to disprove them. He noted that while any set of experiments could not prove a theory (any set of data is consistent with an infinite number of possible theories), experiments could disprove a theory. That is, if the numbers do not turn out the way the theory predicts then the theory must be wrong. Thus the reason science progresses is that scientists propose testable theories, and then proceed to run experiments.”17

This ancient self-imposed burden to have everything just so is largely abandoned, except in limited areas such as mathematics, where precise deduction is essential. Even in mathematics, however, the companion burden to have the a prioris pass some test of absolute verity, which absorbed a tremendous amount of energy on the part of the ancient philosophers, is gone. In fact, one of the great advances in the modern era of mathematics came when it was realized that Euclid’s five axioms of geometry “ain’t necessarily so”, and it doesn’t even matter if they are! Given this emancipating fact, mathematicians gleefully invented various non-Euclidean geometries, Minkowski space, Riemannian geometry and so on. The modern space/time concepts, including Einstein’s General Theory of Relativity, would not have been possible without these new developments.

The mind uses methods that have not yet been expressed and incorporated into formal intellectual methodology. What are these additional methods? I believe that a major missing component in the methodology is an understanding of the role of what I call Process. C. S. Lewis, in his book Surprised by Joy, which is primarily about his conversion, made a philosophical remark that has stuck with me ever since I read the book. He stated that a person can’t contemplate and enjoy a thing at the same time. As he uses the terms, contemplate means to think about a thing, and enjoy means to participate in the thing. Basically, C. S. Lewis is saying that it is hard for a person to stand back and evaluate the drama while he is acting out his life; it is later, when he can reflect on it, that the evaluation occurs.

I think this is a very useful distinction between contemplation and enjoyment, and I think something of the same distinction fits the subject we are considering today. What the mind does in its reasoning processes is two things that take place concurrently: it enjoys its task (i.e., it does whatever it is doing), and it also contemplates the task. In the case of the mind, however, the contemplation takes place at a much higher level than merely thinking about what is taking place. It is attempting to model what is going on, to give it a kind of fictional independent existence. This model is used to aid in the reasoning process, and in fact becomes, in most common activities of the mind, almost indistinguishable from logical deduction.

It is this modeling activity, this role of contemplation by the mind that I believe represents the largest missing ingredient in attempts to duplicate what the mind does, and it is the lack of this component that explains the general failure of the attempts mentioned above.

An essential part of the reasoning process is modeling, which is the process of extracting the essence out of the data the mind receives and representing the data in some analogous way. The building blocks of models are processes. A process is a way of doing something, a description. When a process is completely described in a consistent way, the resultant description is called an algorithm. Models, processes and algorithms: these are the essential building blocks of human reasoning. Logical deduction and the components of classical and contemporary intellectual methodology are the energy that powers these building blocks and the glue that holds them together in a reasonable way.
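
To make the distinction concrete, consider Euclid’s rule for finding the greatest common divisor of two numbers. The rule itself is a process; written down completely and consistently, it becomes an algorithm. The following small sketch, given in the C language that is discussed later in this paper, is merely illustrative, and the names in it are my own:

    #include <stdio.h>

    /* Euclid's process for the greatest common divisor of two
       positive integers, written down completely and consistently,
       that is, as an algorithm: divide, keep the remainder, and
       repeat until the remainder is zero. */
    int gcd(int a, int b)
    {
        while (b != 0) {
            int r = a % b;   /* remainder of a divided by b */
            a = b;
            b = r;
        }
        return a;
    }

    int main(void)
    {
        printf("%d\n", gcd(1986, 426));   /* prints 6 */
        return 0;
    }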

Actually, the concept of process is not new; however, practical considerations have made it impossible to integrate process into the scientific method in an adequate way. Progress in science is remarkably dependent on “bending metal,” that is, on the technical ability needed to conduct experiments. The same is true of process. What has prevented progress in extending the horizons of intellectual advance in the area of process has been the lack of adequate tools for the work. Research in the area has in the past been almost laughably primitive as viewed from the vantage point of the human mental process, and the main reason for this is the lack of tools, specifically computational power.

It may come as a surprise to some of you, but the fact is that the computer age is barely upon us. We are still in the pre-Wright Brothers era of computers, or perhaps at best a month or two after their first flight. All of the hype that you hear about the power of computers, how fast they are, how much capacity, and so on, is empty talk in comparison with the kinds of capacity needed to engage in serious process work.

You see models of process everywhere in the scientific world, but almost universally they are only faint shadows of the things they are trying to model. Economic models attempt to describe the effects of given actions on the economy; environmental models try to capture weather, pollution, and other effects. These investigations of process are almost universally defective, and the reason is simply a lack of adequate tools: the deep understanding that would lead to effective algorithms is missing, and computational power is chronically short.

One of the occupational hazards of the current age is a direct result of the information explosion. More and more people are forced to engage in “leaps of faith” in assessing the facts that they face. It is virtually impossible to investigate even a minute fraction of the relevant data about a given issue, so there is a strong tendency to shortcut the logic. This same hazard is present in scientific investigations: in the face of the necessity to deal with facts and processes that are beyond the capacity of present day equipment, assumptions are made that are the equivalent of these “leaps of faith.”

There is a strong temptation to seek resolution of problems along the lines of conventional thinking. Earlier, I quoted a remark that the brain works with a vast data base, and this leads to a pursuit of means to deal with vast information data banks. In my view this conclusion is more wrong than right: it is an attempt to explain the functioning of the brain in terms of things we know and can deal with. The correct answer, in my view, is that the brain works with a moderately large data base of facts, supplemented by a comparably large data base of algorithms and processes for dealing with the things it encounters daily. Real progress in understanding how the reasoning processes work will come when these algorithms and processes are probed. To do this effectively will require extremely capable tools for computation.

A whole unseen and largely unrecognized world of processes and algorithms exists that the mind uses in its rational processes. Perhaps centuries of hard work lie ahead of us in the attempt to understand these processes. The same motivations that have driven scholarly work in the past will drive the future to discover this new world, because the prototype of proper thought is the mind, and a correct understanding of how the mind works will benefit the investigation of God’s Creation, and can lead to a better understanding of it, if pursued honestly.

The intellectual world needs to integrate the concept of process into its notions of proof, provability and the reasoning process. It needs to take more literally the observation of Popper quoted above: we need to be released from the impossible burden of seeking to prove theories, using some narrow definition of proof, and turn instead to the development of models, processes and algorithms, which can then be subjected to the easier test of disproof.

The present state of development of process methodology stands to where it should go as the alchemy of the middle ages stands to the chemistry of today. This is not intended as a disparaging remark; in fact I suspect that most people today do not realize how much chemistry owes to the work of medieval alchemy. The alchemy of the middle ages and the development of process methodology both find themselves in the situation of probing new concepts and attempting to discern the limits of what is reasonable. It is inevitable that many attempts will be abortive and may even appear to be poorly conceived upon later reflection.

Many so-called “high tech” developments today grossly underestimate the importance of process, and they wrongly magnify the relative importance of advances in conventional technology. Typically, too much energy goes into building the hardware relative to the energy devoted to making it work. The remarks on the eye made earlier in this paper illustrate this: whatever the technical effort involved in making an eye, the effort needed to make visual perception work is a thousand times greater. The reason that the Internal Revenue Service was embarrassed in 1985 by the failures of its computer system is that too much effort went into specifying the hardware details and not enough into making it work. The IRS experience is not an isolated failure, or an extraordinary convergence of bad luck.

It is fair to ask, if the analysis process is so difficult, what is the basis for an optimistic assessment that the situation will ever improve? Why should it be asserted that a new scientific age will ever come? Perhaps we are doomed to see the end of the age of science, followed by perpetual turmoil. Will process ever develop beyond the alchemy stage?

My basis for optimism is that slow but perceptible progress is visible. For example, there have been significant conceptual advances in computer languages and in the understanding of how process algorithms should be written. One of my favorite computer languages is called “C”, which has become widely used within the past five years. The C language is distinctive in that the typical algorithm is very brief -- in fact there are single-line programs that can do significant tasks. But brevity is not in itself a virtue -- after all, there is another language, developed during the 1960’s, called “APL”. That language is brief, and it is also extremely cryptic. C is not like that: C is both easy to read and brief.
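
To give the flavor of this brevity, here is a small sketch of the kind of one-line copying loop that C programmers often point to. The surrounding program and the names are merely illustrative; the point is that the entire copy is carried by a single line whose loop body is empty:

    #include <stdio.h>

    /* Copy the string s into the buffer d.  The entire copy is a
       single line: the assignment copies one character, the two
       pointers advance, and the loop stops once the terminating
       '\0' character has been copied. */
    void copy(char *d, const char *s)
    {
        while ((*d++ = *s++))
            ;
    }

    int main(void)
    {
        char buffer[32];

        copy(buffer, "process");
        printf("%s\n", buffer);   /* prints "process" */
        return 0;
    }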

C and similar modern high level languages make heavy use of “tree” structures. In essence this means taking a complex idea and breaking it into a series of simpler parts. If you have to treat a disease in a tree, it is a lot easier to treat the trunk or a few main branches than to go directly to each one of the leaves. That is the idea of tree structures: breaking down complex ideas into simple parts.
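
A brief sketch in C may make the point concrete. The structure and names below are purely illustrative: each node carries one piece of the complex idea together with pointers to the simpler pieces it breaks into, and one simple routine, applied repeatedly, handles the whole structure from trunk to leaves:

    #include <stdio.h>

    /* One node of a tree: a label for this part of the idea, and
       pointers to the simpler parts it breaks into.  A null pointer
       marks the end of the list of children. */
    struct node {
        const char  *label;
        struct node *children[4];
    };

    struct node leaf1  = { "leaf",   { NULL } };
    struct node leaf2  = { "leaf",   { NULL } };
    struct node branch = { "branch", { &leaf1, &leaf2, NULL } };
    struct node trunk  = { "trunk",  { &branch, NULL } };

    /* Print the trunk, then each branch, then the leaves: the whole
       structure is handled by repeating one simple step. */
    void visit(const struct node *n, int depth)
    {
        int i;

        for (i = 0; i < depth; i++)
            printf("  ");
        printf("%s\n", n->label);
        for (i = 0; n->children[i] != NULL; i++)
            visit(n->children[i], depth + 1);
    }

    int main(void)
    {
        visit(&trunk, 0);
        return 0;
    }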

The secret to analyzing processes is to draw out the simplicity rather than the complexity of the process. But a tremendous amount of effort may be required to do that.

I believe that the information age is bringing in a radical change in the historic viewpoint of the nature of proof and the reasoning process. This change is in the direction of a better appreciation of how the human mind works. I believe that the current intellectual turmoil and apparent flight from rationalism in the academic world is symptomatic of the need for change, and that the change is coming. I believe that this new age of AI brings with it exciting opportunities for the Christian to express his faith, and an understanding of these intellectual forces at work will equip him to counsel and convey the gospel message of salvation through reasoned faith.


REFERENCES

Aristotle, Collected Works, Random House.

Bolter, J. David, Turing’s Man: Western Culture in the Computer Age, University of North Carolina Press, Chapel Hill, NC, 1984.

Charniak, Eugene and McDermott, Drew, Introduction to Artificial Intelligence, Addison-Wesley, Reading, Mass., 1985.

Dewdney, A. K. in Scientific American, September 1984 issue dedicated to computer software.

Dreyfus, Hubert, What Computers Can’t Do: The Limits of Artificial Intelligence, rev. ed., 1979.

France, Anatole, Garden of Epicurus.

Hadamard, Jacques, The Psychology of Invention in the Mathematical Field, Princeton University Press, 1945.

Hofstadter, Douglas R., Gödel, Escher, Bach: an Eternal Golden Braid, Basic Books, 1979.

Hofstadter, Douglas R., Metamagical Themas, Basic Books, 1985.

Lenat, Douglas B, in Scientific American, September, 1984.

Poincare, Henri, Science and Hypothesis, Dover Publications, 1952.

Prigogine, Ilya and Stengers, Isabelle, Order Out of Chaos: Man’s New Dialogue with Nature, Bantam Books, NYC, 1984.

Rose, Frank, Into the Heart of the Mind: An American Quest for Artificial Intelligence, Harper&Row, 1984.

Simmons, Robert F., Computations from the English, Prentice Hall, 1984.

Snow, C. P., The Two Cultures and the Scientific Revolution, Cambridge University Press, 1961.

Uttal, William R., The Psychology of Sensory Coding (1973), The Psychobiology of the Mind (1978), A Taxonomy of Visual Processes (1981), Lawrence Erlbaum Associates, Hillsdale, NJ.

Zukav, Gary, The Dancing Wu Li Masters: An Overview of the New Physics, Bantam Books, 1979.


ENDNOTES

1 Dewdney, A. K. in Scientific American, September 1984 issue dedicated to computer software.

2 Hofstadter, Douglas R., Metamagical Themas, Basic Books, 1985, pp. 633-4.

3 Simmons, Robert F., Computations from the English, Prentice Hall, 1984, p. 5.

4 Hadamard, Jacques, The Psychology of Invention in the Mathematical Field, Princeton University Press, 1945, p. 73.

5 Prigogine, Ilya and Stengers, Isabelle, Order Out of Chaos: Man’s New Dialogue with Nature, Bantam Books, NYC, 1984.

6 Aristotle, Collected Works, Random House, Physics, ¶ 71a1.

7 Ibid., ¶ 72b18, brackets added.

8 Ibid., ¶ 216a.14-20.

9 This argument is part of the dialog in Galileo, Two New Sciences.

10 Prigogine, op. cit., p. 7.

11 Bolter, J. David, Turing’s Man: Western Culture in the Computer Age, University of North Carolina Press, Chapel Hill, NC, 1984, p. 12.

12 Charniak, Eugene and McDermott, Drew, Introduction to Artificial Intelligence, Addison-Wesley, Reading, Mass., 1985, p. 609.

13 Ibid., p. 172.

14 Lenat, Douglas B., in Scientific American, September 1984, p. 204.

15 Bolter, op. cit., p. 200.

16 Ibid., p. 3.

17 Charniak & McDermott, op. cit., p. 648.
 
 

Produced for IBRI
PO Box 423
Hatfield, PA 19440

