BACKGROUND OF THE INVENTION

A MOTIVATIONAL SIMULATION OF ARTIFICIAL INTELLIGENCE

Excerpt from Chapter 17 of A Revolution in Family Values: Tradition vs. Technology
Copyright © 2002, 2003 by John E. LaMuth - All Rights Reserved

.................

The concept of artificial intelligence (AI) has been predicted in theory since virtually the dawning of the Computer Age. As its name implies, this term refers to the artificial simulation of language using a computer. The great English logician A. M. Turing first proposed a standard test for identifying artificial intelligence, based upon his innovative concept of the "imitation game." According to Turing, a computer being tested for "true" AI status would be sequestered in a sealed room, connected to the outside solely by means of a ticker-tape machine (or another, more current, output device). A similarly equipped room containing a human subject would then be pitted against the computer set-up, employing the judgment of an external third party. Through an alternating ticker-tape dialogue with the two rooms, this neutral observer would attempt to distinguish the "artificial" intelligence from the human variety. An ambiguous test result would effectively indicate that artificial intelligence had, indeed, been simulated under such controlled conditions.

No serious contenders for Turing's test have yet come to light, undoubtedly owing to the enormous logistical challenge of programming a human language into the computer. Although such an ambitious achievement is clearly decades into the future, significant inroads have already been made towards these ends. In fact, the Japanese appear to have amassed the greatest lead in this respect, as seen in their ongoing development of the "deductive inference" machine. As its name implies, this innovative form of data-processing machine employs deductive reasoning to establish original conclusions (from a standard battery of logical premises). The product of years of research by ICOT (the Institute of New Generation Computer Technology), the deductive inference machine uses information stored in a regional database to draw fresh conclusions not literally contained within the original data.
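By way of illustration, a minimal sketch of such deductive inference can be given in Python: a toy forward-chaining rule base that derives only those conclusions already implicit in its stored premises (the rules and facts below are placeholders, not ICOT's actual machinery).

    # Toy forward-chaining deduction: repeatedly apply rules until no
    # new conclusions appear in the working set of facts.

    rules = [
        ({"is_bird"}, "can_fly"),                   # if is_bird, then can_fly
        ({"can_fly", "migrates"}, "is_migratory"),  # compound premise
    ]

    def deduce(facts):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(sorted(deduce({"is_bird", "migrates"})))
    # ['can_fly', 'is_bird', 'is_migratory', 'migrates']

Note how nothing emerges that the premises do not already entail: precisely the restriction taken up below.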

The major shortcoming of this "deductive" format, however, is the basic restriction limiting conclusions to the premises immediately on hand. Indeed, such a deductive inference machine must be closely monitored so that it remains carefully within the scope of its regional database. This machine is certainly destined to remain an academic curiosity, scarcely general-purpose enough to pass the rigors of Turing's test. Such an "artificial" set-up further experiences difficulty simulating affect and emotion, a fatal flaw in any convincing AI simulation.

.................

INDUCTIVE INFERENCE IN COMPUTER AI DESIGN

Fortunately, an alternate form of rational inquiry proves infinitely better suited for simulating human intelligence on the computer. Traditionally known as "inductive" reasoning, it gathers together the best available evidence, inferring the most probable conclusion from the sum total of the facts. A general example of inductive reasoning is seen in the courtroom trial, where various shreds of evidence are systematically presented (before finally arriving at a verdict). In contrast to deductive reasoning, however, the conclusions achieved through inductive reasoning are never absolutely certain: there always remains the nagging doubt that the verdict was made in error.

In the sphere of artificial intelligence, however, such a drawback actually amounts to a prerequisite (for humans almost invariably make mistakes). Indeed, the uncertainties of the "real world" give inductive reasoning the clear advantage in such a problem-solving mode. Through such inductive reasoning, each of us builds a "mental model" of reality over a lifetime, forming a master template for all of our current experiences. When our expectations match our surroundings, we achieve a general sense of security. A mismatch, however, leads to a surprised reaction (followed by investigative behavior). Although this sense of security is sometimes ill-founded (as in faulty induction), it is actually a small price to pay for maintaining flexibility within a changeable environment.
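This expectation-matching loop is easily sketched in Python (the features and values below are purely illustrative):

    # "Mental model" loop: compare each expectation against the observed
    # state; a match yields security, a mismatch triggers surprise,
    # investigation, and an inductive revision of the model.

    mental_model = {"door": "closed", "light": "off"}

    def observe(observations):
        for feature, seen in observations.items():
            expected = mental_model.get(feature)
            if expected == seen:
                print(f"{feature}: as expected ({seen}) -> sense of security")
            else:
                print(f"{feature}: expected {expected}, saw {seen} -> surprise; investigate")
                mental_model[feature] = seen   # revise the model inductively

    observe({"door": "closed", "light": "on"})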

In the realm of artificial intelligence, the computer would similarly be equipped with its own formal map of reality (employed in an analogous detection-and-matching mode). Any final conclusions would ultimately rely upon probability, although statistics is one of the computer's computational strong points. Indeed, it is here that the logistics of the "power hierarchy" rightfully enter the picture, serving as the foundation for the first "inductive system" dealing with motivational logic. According to this modified format, the logical attributes of the power hierarchy are programmed directly into the computer, providing a formal model of motivational behavior in general. The computer then uses this programming to infer the precise "power level" for a given verbal interchange. On the basis of this determination, the computer then calculates the appropriate power countermaneuver: effectively simulating a given sense of motivation.
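As a rough sketch (in Python), the inference step and its countermaneuver might be modeled as follows; the levels and cue words are placeholders, not the actual terms of the power hierarchy:

    # Infer the most probable "power level" for an utterance from simple
    # word cues, then select the countermaneuver one level up.

    LEVELS = ["submission", "deference", "blame", "justice"]   # toy ordering
    CUES = {
        "deference": {"please", "kindly"},
        "blame": {"blame", "fault", "wrong"},
    }

    def infer_level(utterance):
        words = set(utterance.lower().split())
        scores = {lvl: len(words & cues) for lvl, cues in CUES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else LEVELS[0]

    def countermaneuver(level):
        i = LEVELS.index(level)
        return LEVELS[min(i + 1, len(LEVELS) - 1)]   # ascend one level

    level = infer_level("you are to blame for this fault")
    print(level, "->", countermaneuver(level))        # blame -> justice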

...................

THE POWER PYRAMID DEFINITIONS

The basic question still remains; namely, what form of data input is most appropriate for such an AI processing system? Although the complete power hierarchy of individual terms proves entirely convincing (on an intuitive level), the AI implications call for an even higher degree of precision than has currently been demonstrated. Indeed, the systematic organization of the "power hierarchy" conveniently allows the construction of what must be termed the "schematic definitions of the power pyramid hierarchy." This crucial innovation spells out (in longhand) the precise location of each virtue or vice within the linguistic matrix, while further preserving the correct orientation of the corresponding authority and follower roles. Indeed, each such definition is formally constructed along the lines of a two-stage sequential format; namely, (A) the formal recognition of the preliminary power maneuver, and (B) the current countermaneuver now being employed (and hence, labeled). Take, for example, the representative power pyramid definition for "justice," reproduced here from the comprehensive collection of definitions (schematically depicted in Tables A-1 to A-4).

..................

Previously, I (as your group authority) have honorably acted in a guilty fashion towards you: countering your (as PF) blameful treatment of me.

But now, you (as group representative) will justly-blame me: overruling my (as GA) honorable sense of guilt.

..................

According to this particular "justice" example, the honorable sense of guilt expressed by the group authority represents the preliminary power maneuver, countered by the just-blaming tactic employed by the group representative. Note further how the respective placement of authority and follower roles is effectively preserved (equivalent to their original depiction in Figure 1A). According to this formalized schematic format, the preliminary power perspective represents the "one-down" power maneuver, whereas the immediate power maneuver designates the "one-up" variety. Power leverage is accordingly achieved by rising to the "one-up" power status; namely, ascending to the next higher (metaperspectival) level. Indeed, this cohesive hierarchy of schematic definitions can further be viewed as a "motivational calculus," providing the formal rules of transformation governing how each level meshes with those above (or below) it. In agreement with the principles of "numerical calculus," the integral can be viewed as the "one-up" power maneuver, whereas the differential is seen as the "one-down" variety.
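One plausible way to encode such a two-stage definition as a data structure (in Python; the field names are illustrative, not drawn verbatim from the tables) would be:

    # A definition pairs the past, "one-down" maneuver with the present,
    # "one-up" countermaneuver, preserving the role abbreviations.

    from dataclasses import dataclass

    @dataclass
    class PyramidDefinition:
        term: str              # the virtue or vice being defined
        past_role: str         # e.g. "GA" (group authority)
        past_maneuver: str     # the preliminary, one-down maneuver
        present_role: str      # e.g. "GR" (group representative)
        present_maneuver: str  # the current, one-up countermaneuver

    justice = PyramidDefinition(
        term="justice",
        past_role="GA",
        past_maneuver="honorable guilt, countering PF's blameful treatment",
        present_role="GR",
        present_maneuver="justly-blaming, overruling GA's sense of guilt",
    )
    print(justice.term, "->", justice.present_maneuver)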

According to Tables A-1 to A-4, the forty-part listing of definitions spans the entire ten-level hierarchy of the virtuous realm. Indeed, the instinctual terminology of "operant conditioning" initially dominates the preliminary levels, replaced (in due fashion) by the virtues, values, and ideals of the higher levels. At each succeeding level, a new term (distinguished by italics) is introduced into the format (representing the current power maneuver under consideration). In fact, beginning with the group level, the preliminary terms begin to drop out of the equation, freeing up space for the current terms under consideration (maintaining a stable buffer of terms in the definitions).

The respective authority and follower roles, in turn, remain essentially fixed throughout the entire ten-level span, although systematically abbreviated (for the sake of brevity) in non-critical positions. In this latter respect, PA stands for the "personal authority," PF represents the "personal follower," and so forth. Two of the more atypical abbreviations are GR (group representative) and RH (representative member of humanity).
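These abbreviations reduce to a simple lookup table (listing only those named in the text):

    # Role abbreviations used throughout the schematic definitions.
    ROLE_ABBREVIATIONS = {
        "PA": "personal authority",
        "PF": "personal follower",
        "GA": "group authority",
        "GR": "group representative",
        "RH": "representative member of humanity",
    }
    print(ROLE_ABBREVIATIONS["GR"])   # group representative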

...............

TECHNICAL CONSIDERATIONS

It still remains to be determined, however, how best to program these definitions into the AI format, particularly in light of current trends in computer design. In terms of hardware design, many experts concur that computer development has currently spanned roughly five stages of technological innovation. Vacuum-tube technology characterized the first generation of computer design, giving way to the transistor designs of the second generation. The integrated "computer chip" ushered in the third generation, refined in the fourth generation as the Very-Large-Scale-Integration (VLSI) chip. Many experts currently agree that a fifth generation of design is now underway, characterized by the expanded use of "logic circuits" and the increased use of "parallel processing." Indeed, in earlier design generations, calculation speed was limited by the von Neumann bottleneck; namely, programming instructions were painstakingly executed one step at a time. Parallel processing, however, allows various aspects of a complex problem to be handled simultaneously, thereby eliminating the bottleneck plaguing "sequential processing."
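The contrast is easily sketched in Python, using the standard concurrent.futures module as a stand-in for dedicated parallel hardware (the workload itself is a placeholder):

    # Sequential vs. parallel evaluation of independent sub-problems.

    from concurrent.futures import ProcessPoolExecutor

    def score(n):
        return sum(i * i for i in range(n))   # stand-in for one sub-problem

    tasks = [200_000] * 8

    if __name__ == "__main__":
        # Sequential: one instruction stream, one step at a time.
        sequential = [score(t) for t in tasks]
        # Parallel: the same sub-problems handled simultaneously.
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(score, tasks))
        assert sequential == parallel   # same answers, without the bottleneck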
The practical applications of such parallel processing are particularly relevant to the AI field of computer design. Indeed, the number of parallel AI processors should ideally equal the sum total of virtuous terms within the power hierarchy (for a grand total of 40): a modest figure, even by today's design standards. In fact, this processor array would further be structured in a hierarchical fashion, effectively mirroring the "stepwise" organization of the power hierarchy. This "stratified" computer architecture would take full advantage of the strict (transformational) logic of the power hierarchy, eliminating much of the redundancy bound to occur in any convincing language simulation. Indeed, the greatest degree of complexity would involve programming at the most basic (personal) level of the power hierarchy, the remaining higher levels following from this basic foundation.

All things considered, the most basic unit of input for the AI computer must necessarily be the sentence, for the "power pyramid definitions" are similarly given in the form of a dual-sentence structure. The AI computer would then employ parallel processing to determine the precise degree of correlation between the inputted (target) sentence and its matching "power pyramid definition" template. This matching procedure would scrutinize all of the grammatical elements of a given sentence, attempting a statistical correlation with the specifics of a given "power pyramid definition." For instance, the tense of the verb, the plurality (or person) of the noun/pronoun, etc., would all be scrutinized according to a pre-set formula. Each processor would then determine the sum total of correct matches, thereby yielding the relative probability that a given sentence matches a particular "power pyramid definition." The processor yielding the highest (overall) rating would be uniquely singled out as the best solution-match by the "master control unit."
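A hypothetical version of this matching procedure might look as follows (in Python); the feature extraction and templates are deliberately crude placeholders for the full grammatical analysis described above:

    # Correlate the grammatical features of an input sentence against
    # each definition template; report the best-matching definition.

    def features(sentence):
        words = sentence.lower().split()
        return {
            "past_tense": any(w.endswith("ed") for w in words),
            "first_person": any(w in ("i", "me", "my") for w in words),
            "second_person": any(w in ("you", "your") for w in words),
        }

    # Expected feature profile for each "power pyramid definition."
    templates = {
        "guilt":   {"past_tense": True,  "first_person": True, "second_person": True},
        "justice": {"past_tense": False, "first_person": True, "second_person": True},
    }

    def best_match(sentence):
        f = features(sentence)
        scores = {name: sum(f[k] == v for k, v in profile.items()) / len(profile)
                  for name, profile in templates.items()}
        return max(scores.items(), key=lambda kv: kv[1])

    print(best_match("previously i acted in a guilty fashion towards you"))
    # ('guilt', 1.0)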

The master control unit, accordingly, would achieve such a result through the aid of a "feedback loop," with the priority of the individual microprocessors reciprocally weighted on the basis of preceding determinations. Indeed, each "power pyramid definition" is composed of both a past and a present design component, establishing context as yet a further consideration in the "matching procedure." In fact, a suitably advanced AI model would retain (in long-term storage) the content of virtually every relevant interaction within a given context. On this contextual basis, the master control unit would selectively "weight" the individual processors (according to a preset formula), taking full advantage of both past and present behavior patterns. In this respect, the computer would become exquisitely sensitive to variations in human personality (just as humans instinctively are), effectively satisfying a further prerequisite of Turing's test.
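A bare-bones sketch of this contextual weighting (the boost formula is an assumption, adopted purely for illustration) might read:

    # Master-control feedback loop: raw processor scores are weighted by
    # a running context of prior determinations before a winner is chosen.

    context_counts = {}   # how often each definition has matched so far

    def weighted_choice(raw_scores, boost=0.1):
        weighted = {name: score * (1 + boost * context_counts.get(name, 0))
                    for name, score in raw_scores.items()}
        winner = max(weighted, key=weighted.get)
        context_counts[winner] = context_counts.get(winner, 0) + 1
        return winner

    print(weighted_choice({"justice": 0.70, "guilt": 0.72}))   # guilt
    print(weighted_choice({"justice": 0.71, "guilt": 0.66}))   # guilt again, boosted by context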

This "first-generation" AI computer would excel mostly in routine (monitoring) types of applications; namely, security guard, night watchman, babysitter, etc., where a simple "sound the alarm" response would be sufficient. This fairly modest range of duties would further allow response characteristics to be tailored to the particular applications.

For instance, in a screening/interview mode, maximum disclosure would be encouraged, keeping computer responses to an absolute minimum. Here a basic, stock repertoire would be sufficient, featuring brief inquiries; namely: who, what, when, where, why, "elaborate further?" Indeed, several such elementary programs have already been implemented to date, using key words in conversation to cue stock rejoinders. This basic class of programs, however, remains susceptible to logical and/or contextual blunders, a circumstance surely remedied by more advanced AI versions.
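Such a keyword-cued repertoire, in the spirit of early programs of this class (ELIZA being the best-known example), reduces to a few lines of Python:

    # Scan the utterance for key words; cue the matching stock rejoinder.

    REJOINDERS = {
        "because": "Why do you think that is the reason?",
        "never": "Never? Elaborate further.",
        "feel": "When did you first feel that way?",
    }
    DEFAULT = "Who, what, when, where, why?"

    def respond(utterance):
        for keyword, reply in REJOINDERS.items():
            if keyword in utterance.lower():
                return reply
        return DEFAULT

    print(respond("I never get credit for my work"))   # Never? Elaborate further.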

.................

FURTHER POTENTIAL DESIGN INNOVATIONS

Situations requiring a more creative response repertoire would necessarily specify the implementation of a "true" AI simulation mode, aimed at permitting "original" sentence synthesis. Indeed, as any public speaker will testify, it is infinitely more difficult to deliver a speech than to simply sit and listen to one. This additional design complexity necessarily specifies a more sophisticated style of response mechanism, the stock repertoire no longer being adequate due to its insensitivity to underlying context. The "master control unit" necessarily assumes this critical function, employing its determination of the current level of communication (the presupposition) in order to activate the processor at the next higher level (the entailment). This basic determination (along with the particulars of the interaction) is then routed to a separate sentence generator, fully equipped with the formal rules governing grammar, syntax, and phraseology. Since there is usually a broad range of ways to express a given sentence meaning, a large number of potential sentences would necessarily be generated (not all equally suited to the task). Each, however, is accordingly slated for subsequent feedback through the detection process, rated for its ability to best express the desired shade of meaning. The extraordinary computational speed predicted for the AI computer would effectively ensure an adequate response selection (within the relatively leisurely limits governing human response time). Only the sentence with the highest (overall) rating would be selected for subsequent delivery to the speech output unit, resulting in a convincing simulation of motivational language in general.
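This generate-and-rank cycle is easily sketched in Python (the candidate list and rating function below are stand-ins for the sentence generator and detection feedback):

    # Generate several candidate phrasings, feed each back through the
    # detector, and deliver only the highest-rated sentence.

    def rate(sentence):
        # Stand-in for detection feedback: score how well the wording
        # fits the intended definition (here, cued by three key words).
        cues = {"justly", "blame", "overruling"}
        words = set(sentence.lower().split())
        return len(words & cues) / len(cues)

    candidates = [
        "You were wrong.",
        "I justly blame you for this.",
        "I am justly overruling you: the blame is yours.",
    ]

    best = max(candidates, key=rate)
    print(best)   # the candidate rated closest to the intended meaning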

...........
