When I joined IBM in 1963 after realizing that the life of an accountant was not for me, I saw the beginning of the computer era. This took the form of the IBM 1401, the first generation of affordable, stored-program computers. When I say “affordable,” I am speaking relatively. You could rent one for something under $2,000 per month. Earlier stored-program computers were designed for universities, governments, and big business and cost millions.
The IBM 1401 was second generation, meaning that it was built using transistors. This made it a good deal more rugged than its vacuum tube predecessors, and much smaller: you could fit a 1401 into your living room, if you had a good-sized living room. For your $2,000 per month, you got a maximum memory size of 16,000 characters. This is what programmers like myself had to work with to persuade the 1401 to complete commercial tasks.
Today’s personal computers are many orders of magnitude more powerful than the 1401, and their role in man’s affairs has become enormous. It is hard to even imagine our work lives today without computers, and most people in the developed nations have computers in their homes. They are well on the way to being indispensable servants to man.
As computers have evolved, so has speculation that one day computers could become sentient. The first movie I recall with this speculation was 2001: A Space Odyssey (based on a story by Arthur C. Clarke, who co-wrote the screenplay). In that movie, HAL was the onboard computer accompanying a team of humans on a journey to Jupiter. The most impressive aspect of HAL’s persona was that he was able to respond to spoken questions thoughtfully and in a modulated, “human” voice. Sadly, HAL became unhinged and attempted to kill his human companions.
Leaving aside for the moment the possibility of a mad artificial intelligence, let’s explore the real possibility that our computer friends could become sentient. The movie came out in 1968, and a lot has happened since.
HAL was able to play games at an advanced level and control his spaceship environment based on sensor inputs. We achieved that level with our computers some time ago. Onboard computers have been integral to humanity’s space missions since the Gemini program of the 1960s, and a computer (IBM’s Deep Blue) defeated world chess champion Garry Kasparov in a match in 1997.
But game playing and computer control systems do not amount to sentience. When we use the term sentience, we mean human-like rationality and self-awareness. The most famous test in this area is the Turing test, which Alan Turing first proposed in 1950. The Turing test proposes a scenario in which a human judge carries on a natural-language conversation with one human and one machine (both concealed from the judge), each of which is trying to appear human. If the judge cannot reliably tell which is which, the machine is said to pass the test. To make the test fair, Turing stipulated that the conversation be conducted over a text-only channel (keyboard and screen) rather than by voice. This is proper: he was concerned with the intelligence of the computer, not its speech recognition and voice synthesis skills.
There is an annual competition for artificial intelligences (the Loebner Prize) that includes an award for a machine that passes the Turing test. So far (2009) this top prize has not been awarded. In my opinion, however, it will not be long before it is.
Turing projected the year 2000 as roughly the date when a computer would pass his test. He predicted that a storage capacity of about 10^9 bits (on the order of 100 megabytes) would be needed. He didn’t specify processing speed, but had he done so, I am sure it would be a speed we surpassed some time ago. So why hasn’t the Turing test been passed?
I can think of three major reasons. One is that all the “thinking” behavior of a computer is governed by its stored program interacting with data.
The execution of a computer program follows a logical path that you can think of as a decision tree. Execution branches to an X subroutine on encountering condition X, to a Y subroutine on encountering condition Y, and so forth. But what if the program runs into condition ZZ, a condition not specified in the program? The answer is that the computer comes to a mindless stop. In my 1401 programming era, we called this an unspecified halt. In modern terms, an unspecified halt translates into a blue screen or a “crash.” The overall point is that a computer will only do what its programmers tell it to do. Another way of saying this is that a computer cannot out-think its programmer(s).
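The decision-tree behavior and the unspecified halt can be sketched in a few lines of Python (the condition names and handlers here are invented purely for illustration):

```python
def dispatch(condition):
    """Branch to the subroutine written for each anticipated condition."""
    handlers = {
        "X": lambda: "ran X subroutine",
        "Y": lambda: "ran Y subroutine",
    }
    handler = handlers.get(condition)
    if handler is None:
        # Condition ZZ was never anticipated by the programmer, so the
        # machine has nothing to branch to: the 1401-era "unspecified halt."
        raise RuntimeError(f"unspecified halt: no handler for {condition!r}")
    return handler()
```

A modern program raises an error (or crashes) rather than halting the whole machine, but the principle is unchanged: the computer can only follow paths its programmers laid out in advance.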
The second point bearing on a computer’s ability to fool a judge into thinking it is human is that the human mind is not the linear mind of a computer. The human brain is massively parallel, with billions of neurons working at once. Okay, you might say, we could do that in a computer design. The thing is, you still would not be emulating the human brain. The human brain basically considers a given scenario and proceeds by “jumping to conclusions.” If a conclusion fails an internal reasonableness test, our brain jumps to another conclusion, and so on, until it is reasonably satisfied that it has done its best.
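This generate-and-test style of thinking can be caricatured in code. In the Python sketch below, the candidate conclusions and the reasonableness check are invented for illustration:

```python
import random

def jump_to_conclusions(candidates, reasonable, seed=0):
    """Leap to a candidate conclusion; keep it only if it passes an
    internal reasonableness test, otherwise leap to another one."""
    order = list(candidates)
    random.Random(seed).shuffle(order)  # the leaps come in no particular order
    last = None
    for guess in order:
        last = guess
        if reasonable(guess):
            return guess  # satisfied: stop thinking
    return last           # no guess passed; settle for the final leap
```

Unlike an exhaustive search, this process stops at the first answer that is good enough, and if nothing satisfies the test, it still produces its best attempt rather than halting.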
The third point has to do with the way “soft data” is handled by human beings. The fundamental structure of computer logic is binary, ideally suited to hard facts and true/false decisions. But abstract human thinking involves a lot of gray areas where the data (the knowledge) is only partially understood, and our brain makes judgments and proceeds using common sense. There is no doubt that, eventually, our computers will have more hard data at their disposal than we do, but what about common sense and the ability to think past knowledge gaps?
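The gap between hard binary facts and soft, partially understood data is the gap that fuzzy logic tries to bridge. A toy Python sketch (the question of “tallness” and the 160–190 cm thresholds are invented for illustration):

```python
def crisp_is_tall(height_cm):
    # Binary logic: a hard true/false fact with nothing in between.
    return height_cm >= 180

def graded_is_tall(height_cm):
    # Soft judgment: a degree of truth between 0.0 and 1.0,
    # ramping up linearly between 160 cm and 190 cm.
    return min(1.0, max(0.0, (height_cm - 160) / 30))
```

To the crisp test, a 179 cm person is flatly “not tall”; to the graded one, the same person is “mostly tall” (about 0.63), which is closer to how people actually reason in gray areas.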
So am I saying that it cannot be done? No, I am just saying that it is not easy. I think it will always be valid to say that a computer’s intelligence cannot exceed the combined intelligence of its designers (including the designers of both the hardware and software). But if we have a team of designers and layers and layers of software using advanced search and logical algorithms, and we have enormous volumes of stored data with classifying tags to refer to, I think there is no question that we can create an artificial intelligence that will be very impressive. For instance, the jumping-to-conclusions thinking style of man has already been emulated by researchers in the field of hierarchical temporal memory. I think it is just a matter of time before artificial intelligence will exceed any individual human being’s intelligence in designated domains.
I emphasize “in designated domains” because I cannot, at the moment, imagine a computer equaling a Picasso or Mozart in artistic creativity. In fact, to generalize, I think I can say with confidence that computers will always score very low in any test of spirituality.
What do I mean by “spirituality”? I like the dictionary definition (per Wikipedia), which goes, “Spiritual matters are those involving humankind’s ultimate nature and meaning, not only as material biological organisms, but as beings with a unique relationship to that which is beyond both time and material existence.” By this definition, spirituality includes the creative arts (music, poetry, drama, art, etc.) and the intellectual efforts of great thinkers seeking to understand man’s place in the universe.
Of course, if we were to confront the computer of our future—HAL Future, a computer that has passed a comprehensive test for its intelligence—with its poor performance in any test of its spirituality, it might easily reply, “So what? Who needs it?”
Now, I think that HAL Future’s “Who needs it?” question is a very important one because I believe that spirituality is going to be the substantive difference between humanity’s intelligence and artificial intelligence. And I think this is an enormously important difference.
The title of this post is “Our Computers – Successor to Man?” The reason for the title is that we should consider the possibility that man fails in his attempts to make the transition to future man (or doesn’t even make the attempt). What then? Well, I think the best candidate (on Earth) to replace man as the intelligence-provider to achieve God’s purpose would be our computers. (I am thinking out in time a hundred years or so.) How ironic it would be if man’s machine replaced man.
A computers-in-charge future is not so far-fetched when you think about it. Computers have none of our personality warts such as aggressiveness, egocentricity, personal ambition, irrationality, and quick temper. They would have no problem with a depleted ozone layer, global warming, or an atmosphere with reduced oxygen. Space travel would be a snap for them. No need for the extra weight of life-support systems. No worries about multiyear journeys between stars. Maybe God would be better off depending on our computers of the future?
I think not. I believe that our spirituality is critically important to our success at achieving our mission.
As a matter of fact, I think we will only succeed if we develop our spirituality. This is our secret ingredient, our magic wand—or whatever metaphor you might choose.
Of one thing we can be certain. One way or another, computers will be a big part of the future. We must hope that man succeeds in becoming future man. Then our computers will be our servants.