
MACHINE INTELLIGENCE

1. SUPERCOMPUTERS

All computers, from the largest mainframe to the smallest microprocessor, work in essentially the same way. Their main differences lie in speed of operation, memory size, the range and type of associated peripheral equipment, software and applications.

In the late seventies, with computers becoming ever more important for intensive "number-crunching" operations in certain highly-specialised fields, the so-called "supercomputer" arose. The only one I came into contact with was the "Cray" (there may have been others).

This was at least an order of magnitude more powerful than other mainframes.

Its very high-speed logic chips were so densely packed, and generated so much heat, that Freon cooling (as in a refrigerator) was required. Interconnecting wiring was kept very short, since at these operating speeds even the time taken for an electrical signal to pass along a length of wire was significant (about one nanosecond - one thousand-millionth of a second - per foot). It was an "array processor", carrying out not one but sixty-four calculations at the same time. Its electrical power requirements would have sufficed for some 160 electric fires! It cost (back in 1980) about two million pounds.
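The point about wire length is simple arithmetic, and can be sketched as a few lines of Python. The figures and function below are my own illustration, not Cray specifications, though the one-nanosecond-per-foot rule is the one quoted above, and the Cray-1's 80 MHz clock (a 12.5 ns cycle) is a matter of record.

```python
# The rule of thumb quoted above: an electrical signal travels
# roughly one foot per nanosecond along a wire.
NS_PER_FOOT = 1.0  # ~1 ns per foot, as quoted in the text

def propagation_delay_ns(wire_length_feet):
    """Approximate signal delay, in nanoseconds, for a length of wire."""
    return wire_length_feet * NS_PER_FOOT

# At a 12.5 ns machine cycle (the Cray-1's 80 MHz clock), even a
# four-foot run of wire would consume nearly a third of one cycle.
for feet in (0.5, 1.0, 4.0):
    print(f"{feet} ft of wire -> about {propagation_delay_ns(feet)} ns delay")
```

This is why the machine's wiring runs were kept so short: at these speeds, the geometry of the cabinet itself becomes a timing constraint.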

The computing power of this machine was incredible, but perhaps in a way that only advanced engineer-mathematicians with nearly-impossible and highly complicated modelling problems could appreciate. It was indeed a supercomputer, but by no means can it be regarded as a "supermind".

Developments in microchip technology and very large-scale integrated circuitry have now led to a new type of array processor (that is, one that can carry out a number of different calculations simultaneously). It is possible to fit the entire circuitry of a complete computer on to one inexpensive chip, smaller than a fingernail, that runs relatively cool. One machine could house a very large number of these chips - perhaps many thousands - suitably interconnected so as to form computer hardware with a quite fantastic capability.
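The many-chip idea can be illustrated in software, though it should be stressed that a real array processor parallelises at the hardware level, not like this. In the sketch below (all names and the worker count are my own invention), each "worker" stands in for one chip, taking its own slice of a large job before the partial results are combined:

```python
# A toy illustration of the many-chip idea: one large job split across
# several workers, each handling its own slice, results then combined.
# (Threads here merely illustrate the division of labour - real array
# processors carry out their calculations simultaneously in hardware.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """The work one 'chip' would perform on its slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Deal the data out, one interleaved slice per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

print(parallel_sum_of_squares(list(range(10))))  # same answer as a serial loop
```

The design point is that the answer is identical to a one-calculation-at-a-time loop; only the time taken differs when the workers are genuinely independent pieces of hardware.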

As I have said, it is the program software that provides the "intelligence", and fully to exploit such formidable hardware would require a great deal of very complex software. Of course, much of this could, in effect, be written by computer.

Such equipment could well be termed a supercomputer. It could also represent one of several possible routes to the "supermind".

2. ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS

For many years now, useful computing applications have been arising in the generic area of "Artificial Intelligence" (AI). The term itself is misleading, since, impressive as some of these systems are in operation, they are not truly "intelligent", relying as they do on a detailed knowledge database and a set of (often very complicated) "rules". They are better termed "Knowledge-Based" or "Expert" systems. As always, the seeming intelligence resides in the program software - not the hardware - and they can be mounted on orthodox micro-, mini- or mainframe computers.

Their application lies in the diagnostic and problem-solving fields, in relatively narrow domains. Typically, they are based on a software "shell" which comprises an "inference engine" and a framework on which can be mounted the appropriate rules. These rules arise from an iterative interaction between a "knowledge engineer" and one or more "experts" in the particular field.

When completed, these systems can often outperform the expert on his best day and with ample time at his disposal! They pose a series of questions, each depending on the previous answer, and arrive at a conclusion with a stated figure of confidence (expressed as a percentage). The system can explain how it arrived at its findings, and what difference one or more changed answers to its questions would have made.
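The question-and-answer pattern just described can be sketched very simply. Everything in this Python fragment - the rules, their conditions, the confidence figures - is invented for illustration; a real expert system shell would hold hundreds of expert-supplied rules and a far more elaborate inference engine.

```python
# A minimal sketch of the pattern described above: expert-supplied
# rules, an inference step, a conclusion with a stated confidence,
# and an 'explanation' of what the answers led to.
# (All rules and figures are invented for illustration.)

RULES = [
    # (required answers, conclusion, confidence %)
    ({"fever": True, "cough": True}, "likely respiratory infection", 70),
    ({"fever": True, "cough": False}, "possible viral illness", 40),
    ({"fever": False}, "no acute infection indicated", 55),
]

def infer(answers):
    """Return (conclusion, confidence) for the first rule whose
    conditions all match the answers given so far."""
    for conditions, conclusion, confidence in RULES:
        if all(answers.get(k) == v for k, v in conditions.items()):
            return conclusion, confidence
    return "no conclusion", 0

def explain(answers):
    """The 'explanation' facility: report the finding alongside the
    answers that produced it."""
    conclusion, confidence = infer(answers)
    return f"{conclusion} ({confidence}% confidence), given {answers}"

print(explain({"fever": True, "cough": True}))
```

Changing one answer and re-running `infer` shows "what difference a changed answer would have made" - the feature noted above - since a different rule then fires.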

There are many situations where these systems can be of great value, such as in preliminary medical diagnostic screening where the expert is not available. Even if the expert, perhaps hard pressed for time, were present, the system would still provide useful prompting for the less obvious questions which might otherwise be overlooked.

The more advanced "shells" are very "processor-hungry" and can access very large databases. Developments in computing technology now provide adequate yet inexpensive resources well-suited to such demands. The cost of creating a really elaborate system would reside mainly in the many skilled man-months required for the programming. This is probably the only serious obstacle in implementing very large-scale expert systems which could undertake major responsibilities in the running and ordering of human affairs.

However, no such system could be regarded as a "mind", or even as a "brain". We would still have the "wise fool", but maybe quite wise and not so much of a fool!

3. DEVELOPMENT OF A BRAIN

The brain of a newborn baby will already have some neural pathways in place, but to a large extent it can be regarded as a tabula rasa - a blank sheet of paper yet to be written on. All the sensory inputs - visual, auditory, tactile, olfactory, spatial disposition - provide an enormous and confusing mass of data. Their interpretation and interrelationships have to be learned, and physical movements co-ordinated. All this represents a formidable body of knowledge, and indeed it has been estimated that by the age of four we have already completed half the learning that we will ever accomplish throughout our entire lives.

A person born blind, and whose sight is restored later in life, faces curious problems. The visual data initially is quite meaningless. An orange, for example, would not be recognised by sight but only by its feel and smell. Even the difference between a cube and a sphere would not be apparent until the hands felt the corners of one and the smooth roundness of the other. A difficult and lengthy learning process is required, and in some cases full confidence in the newly-acquired sense will never completely be attained.

Thus the development of the brain is dependent on a rich and detailed input of data from the outside world, and a great deal of trial-and-error verification of these data.

Assume, for the sake of argument, that a machine intelligence could be constructed with a data-handling and storage capacity equivalent to the human cerebral matrix. It would not employ the same architecture as a natural brain, but analogues of all the brain's functions - such as semantic and gestalt recognition, association and retrieval, and heuristic (learning) capability - could be built in. This would be a complex task - but not an impossible one.

A further task would remain: the devising of an adequate interface with the outside world. For this there are many possibilities, including data sources not normally regarded as sensory channels.

A vast reservoir of stored data - perhaps amounting to the accumulated knowledge of mankind - could be put in place in a readily accessible, searchable and retrievable form.

I have no doubt that such a project is theoretically feasible, but it would be a colossal undertaking, equivalent in scale to the moon project, absorbing enormous amounts of highly-qualified manpower.

It would represent the highest achievement of the human race, but what return would we get for such a great expenditure? Perhaps the salvation of the race itself.

In practice, such a development is most unlikely to take place in one step, but there are several intermediate stages that could yield immediate benefits for commensurate costs.
