
My approach to AI

A serious debate on the nature of intelligence is far from easy. It has to address some truly profound issues to make important contributions to the topic. Some very interesting ideas have been put forth so far, but with no real consensus yet. The challenge arises primarily from the diverse range of activities the human mind can be applied to, each with a claim to being intelligent. The very difference in the nature of these activities forces different interpretations of intelligence. This being the case, the nature of an artificial intelligence is also highly contentious. In this argument, I am going to take a radical stand, dictated by my conviction that a far-sighted approach is essential, mainly because of the potentially pervasive and far-reaching consequences of the field.

If I were asked to frame one and only one golden guideline for the creation of a new technology, it would be: “Would we, at any time in the future, regret inventing it?” It may seem that immeasurable foresight is needed to answer this question. In fact, the question assumes critical importance in the case of AI. As I will show, there is a way out of this problem. I will first make the argument in the context of AI; it can then be extended to any other case.

My argument starts with the strong emphasis that intelligence needs to be viewed in a way that would give the same answer regarding its nature even when viewed from widely different perspectives. Such a holistic approach would pave the way for the emergence of concord on the issue.

The argument rests on four major abstractions, which I will call “RealQualities”, “IdealQualities”, “RealMind” and “IdealMind”. RealQualities is the embodiment of the nature of a typical human being: it implies that he is subject to emotions such as anger, fear, hate, jealousy, pride, love and so on. IdealQualities is the total absence of such emotions in any situation, but under the same capabilities of mind. RealQualities and IdealQualities together make up the RealMind, whereas the IdealMind is made up of IdealQualities alone. The ensuing discussion should be understood strictly with these definitions in view.

The real difference among these four abstractions lies in the extent of rationality that goes with them and in the perspective on the issue that they represent. The IdealMind signifies a perfectly rational mind that is free of contradictions and holds totally consistent views; in other words, its reasoning is always sound. Because IdealQualities is defined to be free of emotional influences, it is, as I argue, preferable to RealQualities as the basis for sound reasoning, and as mentioned earlier it makes up the IdealMind.

The nature of IdealQualities certainly needs elucidation. It refers to the kind of reasoning a mind makes when it is not impacted by the emotions it generates. A simple argument shows that this is indeed the ideal. It takes no convincing to see that it is to our advantage to free ourselves from undesirable control. In an emotional state we undeniably lose our self-control, which is central to reasoning. The loss of control may last only a few moments or it may linger; in either case, as long as we are in its grip, it impairs our reasoning.

This impairment in reasoning is temporary, but to me it also explains why people differ in their logical ability. By a simple extension of the idea, people differ in logical ability when the reasoning impairment lasts a lifetime: their prevailing personality, which is also responsible for their emotional make-up, keeps impacting their reasoning. To see how this could manifest as cognitive deficiencies, suppose that as emotional states repeat they begin to crystallize, perhaps over generations, bringing a hereditary angle to the phenomenon. There may then be a correspondence between certain emotional states and particular types of cognitive error. Even the obstruction of high-level cognitive processes may result, which manifests as low intelligence.

Thus freeing ourselves from the emotional grip is certainly advantageous. Though certain emotions may sometimes confer a survival advantage, from the perspective of sound reasoning an emotional state is a clear impediment. IdealQualities therefore represents the ideal for a mind.

I will now make my point clear. An entity that claims to be truly intelligent should possess an IdealMind. Since it represents a state free of contradictions, it is ideal for making far-sighted decisions, and that should be the goal of artificial intelligence. The decisions such an entity makes should be led by IdealQualities. The task, then, is to figure out how to gauge the emotional make-up of our decisions as reflected in the choices we make; the choice that is least net-negative represents the most far-sighted one.

As a part of the argument, I will present a toy example:

Situation: Person A enjoys a respectable position in his company but aspires to the top job. One day, however, he hears that one of his colleagues has beaten him in the race for it.

We consider two potential responses by A to this situation: (i) he swears to chalk out a plan to discredit his colleague and ultimately have him removed from the top job; (ii) he resolves to begin a serious introspection to understand what he lacked that led to his failure to reach the top position, and to address the causes of that failure.

Analysis: One possible emotion reflected in choice (i) is jealousy, which is a negative one. The emotion reflected in (ii) is non-negative. Choice (ii) is the far-sighted one.

Though the above is a simplistic scenario, it clearly indicates the basis on which the foundations of AI should be laid if it is to be of enduring benefit to mankind. The technical challenge of building an actual system is, of course, a different issue.

A holistic view of Sorting Algorithms

A holistic way to evaluate sorting algorithms is on the basis of the number of partitions made in each pass. In algorithms such as bubble sort, selection sort and the other quadratic-complexity algorithms, the number of partitions made in each pass is zero, i.e., all the unsorted elements are treated as one unit in each pass. When the number of partitions is one or more, recursion is used. Partitioning helps because it obviates the need to compare each element with the elements outside its partition. In the case of binary partitioning, about log2(n) comparisons are made for each element in the average case, the total complexity therefore being n·log2(n).
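
To make the idea concrete, here is a minimal Python sketch of binary partitioning (the function and sample values are my own illustration, not drawn from any particular library):

    # A quicksort sketch: in each pass the unsorted elements are split into two
    # partitions around a pivot, so an element is never again compared with
    # elements outside its own partition.
    def quicksort(items):
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        left = [x for x in items if x < pivot]      # one pass: binary partition
        middle = [x for x in items if x == pivot]
        right = [x for x in items if x > pivot]
        # Recurse on each partition; with balanced splits each element takes part
        # in about log2(n) passes, giving roughly n*log2(n) comparisons overall.
        return quicksort(left) + middle + quicksort(right)

    print(quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]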

One can extend this concept by making more than one partition. The radix sort, for example, makes use of this, though it limits the number of comparisons in a different way: a partition is divided not on the basis of whole-value proximity but on the basis of “value and position of digit” proximity. In the typical value-proximity algorithms the elements of a partition are stored together based on their values, so those algorithms can restrict the comparisons in a particular pass to the partition of the element being compared. In radix sort, by contrast, the child partitions are not based on physical or value proximity but on the digit position under consideration in a particular pass and on the value of the digit each element has in that position. Though the whole list is scanned during each pass, the comparisons made for a particular element in each pass are the minimum possible because of the basis used for partitioning. Conceptually, we can therefore think of there being n partitions in each pass, with the number of passes being the number of digits. Partition, as the reader can see, is used here in a broader sense: its extent depends upon the extent to which an element is isolated from the other elements in each pass, whether on the basis of value or on some other, atypical basis as in the radix sort. We can see that as the number of partitions increases, the performance of a sorting algorithm improves.
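
A minimal Python sketch of an LSD radix sort on non-negative integers may make the digit-based partitioning concrete (the base of 10 and the sample values are illustrative assumptions):

    # An LSD radix sort sketch: each pass distributes the whole list into one
    # bucket per digit value, using only the digit at the current position, so
    # no element-to-element comparison is ever made.
    def radix_sort(nums, base=10):
        if not nums:
            return nums
        place = 1
        while max(nums) // place > 0:
            buckets = [[] for _ in range(base)]          # one bucket per digit value
            for x in nums:
                buckets[(x // place) % base].append(x)   # partition by current digit
            nums = [x for bucket in buckets for x in bucket]
            place *= base                                # next pass: next digit position
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))   # [2, 24, 45, 66, 75, 90, 170, 802]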

We use this intuition to suggest two steps for designing a generalized sorting algorithm whose performance varies with the choices made under this design; a sketch follows the two steps below. We assume that at least one partition is made in each pass.

1. Select the basis of partitioning. It could be value, position or any other useful aspect, in isolation or in combination. A part or all of an element may be involved in each pass; radix sort, for example, typically uses both value and position, one digit at a time.

2. Decide how the partitions will be made. The obvious and universally used way is fixed partitioning, where the number of partitions in each pass is the same. A useful variant is variable partitioning, where the number of partitions varies, say according to the number of elements in a partition.
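
A rough Python sketch of such a generalized design follows. It assumes numeric keys, takes equal slices of the value range as the basis of partitioning, and exposes the number of partitions per pass as a parameter; with num_partitions set to 2 it behaves like a range-based quicksort variant, while a large value approaches a bucket- or radix-style scheme.

    # A generalized partition-based sort for numeric keys. The two design
    # parameters are the basis of partitioning (here, equal slices of the value
    # range) and num_partitions, the number of partitions made in each pass.
    def general_sort(nums, num_partitions=2):
        if len(nums) <= 1:
            return list(nums)
        lo, hi = min(nums), max(nums)
        if lo == hi:
            return list(nums)                        # all values equal: nothing to order
        width = (hi - lo) / num_partitions
        partitions = [[] for _ in range(num_partitions)]
        for x in nums:                               # one pass: range-based partition
            idx = min(int((x - lo) / width), num_partitions - 1)
            partitions[idx].append(x)
        # Recurse into each partition; lo and hi always land in different
        # partitions, so every level strictly shrinks the range.
        result = []
        for p in partitions:
            result.extend(general_sort(p, num_partitions))
        return result

    data = [29, 3, 17, 8, 25, 11, 2, 30]
    print(general_sort(data, num_partitions=2))      # quicksort-like splitting
    print(general_sort(data, num_partitions=16))     # bucket/radix-style splitting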

A sorting algorithm is typically tuned towards one or more of the following:

1. Efficiency in Time

2. Efficiency in Memory

3. Satisfaction of the above in special cases

We will discuss some of these choices further with respect to the first requirement, as it is generally of major concern. Choosing the right basis for partitioning and the number of partitions decides the number of passes that will be made and also the number of comparisons in a pass. An approximate formula for the number of passes made in the worst case is P = log_num(R), where R is the range of the values, num is the number of partitions in each pass and P is the number of passes. We assume that all partitions in a pass cover the same range. When partitions are made this way, even quicksort has a worst case of O(n·log R). The number of passes can be shown to be logarithmic even in the worst case, though a proof is beyond the scope of this discussion. Note that this worst case does not apply to the radix sort, which is unaffected by skew because of its basis of partitioning; its number of passes is P = log_num(n) · d, where n is the number of elements and d is the maximum number of digits, and in radix sort num and n are equal.
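
As a small worked check of this estimate, the Python snippet below evaluates P = log_num(R) for a few illustrative values of num (the figures are examples, not measurements):

    # Worked check of the pass-count estimate P = log_num(R) for a value range R
    # and num partitions per pass.
    import math

    def worst_case_passes(value_range, num_partitions):
        # Each pass shrinks a partition's range by a factor of num_partitions.
        return math.ceil(math.log(value_range, num_partitions))

    R = 1000000
    for num in (2, 10, 1000):
        print(num, "partitions per pass ->", worst_case_passes(R, num), "passes")
    # 2 -> 20 passes, 10 -> 6 passes, 1000 -> 2 passes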

Some Choices and their worst case number of passes:

1. Value, whole element and 2 partitions always. Example – a slight variant of Quicksort – O(log2(R))

2. Value, position, part of an element and n partitions always. Example – typical Radix Sort – O(d)

3. Value, whole element and √n partitions always – a novel combination – O(log2(log2(R)))

4. Value, position, half of an element always and n partitions always – a variant of LSD Radix Sort – O(1)

We conclude our discussion by noting that a single sorting implementation built around the design parameters mentioned earlier could simulate any of the common sorting algorithms developed so far and also accommodate future algorithms, if any.

High dimensional indexing

Consider the following data

F.Name    L.Name  Father's Name   Mother's Name

 abc      dac     adc             dab              
 dbc      aac     abc             aad 
 abc      dac     bbd             ddb 

This is four-dimensional data with three rows. Our objective is to be able to search this type of data on any combination of attributes with the same efficiency. We could do this by maintaining indices on all the dimensions, but that would multiply the stored data by the number of dimensions. One approach to this problem is to make use of repetitions in the data to condense the information. We do this by expanding each dimension of the data; for example, “First Name” is expanded into three dimensions, and the whole data set now looks as follows:

FN1 FN2 FN3 LN1 LN2 LN3 FATH1 FATH2 FATH3 MO1 MO2 MO3
a    b   c   d   a   c    a     d     c    d   a   b
d    b   c   a   a   c    a     b     c    a   a   d
a    b   c   d   a   c    b     b     d    d   d   b

Now there are more dimensions and the data is also more repetitious.
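
A small Python sketch of the expansion step, assuming fixed-length string attributes (the function name and widths are my own), splits each attribute into one column per character position:

    # Dimension expansion for fixed-length string attributes: each attribute
    # column becomes one column per character position.
    def expand_dimensions(rows, widths):
        expanded = []
        for row in rows:
            new_row = []
            for value, width in zip(row, widths):
                new_row.extend(value.ljust(width)[:width])   # one dimension per character
            expanded.append(new_row)
        return expanded

    data = [("abc", "dac", "adc", "dab"),
            ("dbc", "aac", "abc", "aad"),
            ("abc", "dac", "bbd", "ddb")]
    for row in expand_dimensions(data, widths=[3, 3, 3, 3]):
        print(row)
    # ['a', 'b', 'c', 'd', 'a', 'c', 'a', 'd', 'c', 'd', 'a', 'b']  etc.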

When the values of the first three dimensions are as follows, we have:

FN1 FN2 FN3   LN1 LN2 LN3   FATH1 FATH2 FATH3   MO1 MO2 MO3
 a   b   c     d   a   c    a,b   d,b   c,d      d  a,d  b

Thus, instead of the 18 characters that would be stored in the index on FN1, FN2, FN3 for the values a, b, c, only 14 characters are needed after removing repetitions. Other indices may be maintained similarly. Since actual data contains millions or billions of rows, considerable repetition will be manifested.
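
One way such a condensed index might be built is sketched below in Python: rows sharing the same values on the chosen key dimensions (FN1, FN2, FN3 here) are collapsed, and each remaining dimension keeps only its distinct characters. The function and the zero-based column numbering are my own illustration of the idea:

    # A condensed index on a chosen group of expanded dimensions: rows sharing
    # the same key are collapsed, and each remaining dimension stores only its
    # distinct character values, which is where the repetition savings come from.
    from collections import defaultdict

    def condensed_index(expanded_rows, key_columns):
        index = defaultdict(lambda: defaultdict(set))
        for row in expanded_rows:
            key = tuple(row[c] for c in key_columns)
            for col, value in enumerate(row):
                if col not in key_columns:
                    index[key][col].add(value)       # keep each distinct value once
        return index

    rows = [list("abcdacadcdab"),
            list("dbcaacabcaad"),
            list("abcdacbbdddb")]
    idx = condensed_index(rows, key_columns=(0, 1, 2))   # key on FN1, FN2, FN3
    for col, values in sorted(idx[("a", "b", "c")].items()):
        print(col, sorted(values))
    # e.g. column 6 (FATH1) -> ['a', 'b'], column 3 (LN1) -> ['d']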

The full possibilities have not been explored, but it very much seems that creating repetitious data by dimension expansion holds promise.