This paper was written Aug. 23, 1999

**IMPORTANT NOTE (Mar. 6, 2006):** I have encountered a valid counterexample which invalidates the original claim that the constants mentioned in the definition can be zero too. Thus, I have whittled down the scope of this paper to account for it, in lieu of a re-write, for now. Allowing zero as one of the constants makes one of the answer variables disappear, and thus collapses the infinite regress which forms when all of the answer variables are present in each line of the equation - one on the left-hand side, all the others on the right-hand side.

The counterexample I encountered was in fact a matrix whose solutions were all 1s, which could certainly be seen as being in broad agreement with the paper as it originally stood, but the original version would have implied that the solution to this particular matrix was all zeroes. The matrix in question was: X = -3Y + 8 - 4Z; Y = 5 - 2X - 2Z; Z = 3.5 [+0Y] - 2.5 X. The solution to this matrix is X = Y = Z = 1. According to section 6(g), though, it should be X = Y = Z = 0, which is not a valid solution.
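As a sanity check (not part of the original paper), the counterexample can be verified numerically with a short Python sketch; the coefficients are taken directly from the matrix above:

```python
# Verify the counterexample: X = -3Y + 8 - 4Z; Y = 5 - 2X - 2Z; Z = 3.5 - 2.5X
X = Y = Z = 1
assert X == -3*Y + 8 - 4*Z   # -3 + 8 - 4 = 1
assert Y == 5 - 2*X - 2*Z    #  5 - 2 - 2 = 1
assert Z == 3.5 - 2.5*X      #  3.5 - 2.5 = 1

# The all-zero triple, by contrast, fails every equation:
X = Y = Z = 0
assert X != -3*Y + 8 - 4*Z
assert Y != 5 - 2*X - 2*Z
assert Z != 3.5 - 2.5*X
print("X = Y = Z = 1 checks out; all zeroes does not")
```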

Given this counterexample, I must reduce the scope of the original claim and withhold any implications for econometrics for now. Rejoining this paper to the article which formed its original context must await a complete re-write. I found the counterexample thanks to a question asked at the Yahoo! group Mathematics.

**REQUIRED TO PROVE:** That the only finite solutions for a self-referential matrix defined below are four trivial or degenerate cases: all variables equaling zero {(0,…,0)}; all variables equaling one {(1,…,1)}; both of the first two cases {(0,…,0),(1,…,1)}; or the empty set {}.

**0. Definition**

A "self-referential matrix" is a square system of x variables and x equations, where each variable appears exactly once as the answer to one equation - hence the number of variables and equations are equal - and these variables are the only ones used to define each equation. There are no independent variables. The only constants permitted are real ones that are either added to the equations or serve as scalar multiples for one or more variables, to a maximum of x(x-1) scalars, represented by "k," and real constants that serve as the powers used for exponentiation. None of these constants can equal zero. The variables on the right-hand side can be combined using addition, subtraction, multiplication or division, and each, as indicated, can be raised to a real power. The number x is a positive integer greater than two.

**1. The simplest: additive self-referential matrices**

This matrix is a set of equations like this:

n_{1} = n_{2} + … + n_{x}
n_{2} = n_{1} + … + n_{x}
[etc. - you can see the pattern]
n_{x} = n_{1} + … + n_{x-1}

- The simplest case, comprising the first part of the stepladder proof, is

n_{1} = n_{2} + n_{3}
n_{2} = n_{1} + n_{3}
n_{3} = n_{1} + n_{2}

Now, if you take a look at it, you'll see that the only non-infinite values that fit are all zeroes. I could cite the fact that zero is the additive identity; the fact that each variable that's the solution to one equation appears in the definitions of the others, more than once per system; and the fact that the only value that can make this work is the identity, because it doesn't change the original variable and so can be applied x times and leave the answer the same. But I'll appeal to intuition - fortified by the realization that trying to solve such a system by conventional algebraic means leads to an infinite regress, and the only way to avoid it is to use the additive identity, because then you're always stuck in the same place no matter how many times you iterate. (I'll go into more detail when discussing mixed cases in **6a**, because this realization is less intuitively obvious there than for the more general three-variable set here.)

- The second part is to assume it's true for a set of x variables and see if it holds up for (x+1) variables:
First of all, assume that the x-variable system

n_{1} = n_{2} + … + n_{x} = F_{1}
n_{2} = n_{1} + … + n_{x} = F_{2}
[etc.]
n_{x} = n_{1} + … + n_{x-1} = F_{x}

has only the all-zero solution. Next, add variable n_{x+1}:

n_{1} = n_{2} + … + n_{x+1}
n_{2} = n_{1} + … + n_{x+1}
[etc.]
n_{x} = n_{1} + … + n_{x-1} + n_{x+1}
n_{x+1} = n_{1} + … + n_{x-1} + n_{x}

Rewriting the first x equations in terms of the F's:

n_{1} = F_{1} + n_{x+1}
n_{2} = F_{2} + n_{x+1}
[etc.]
n_{x} = F_{x} + n_{x+1}
n_{x+1} = n_{1} + … + n_{x-1} + n_{x}, which = F_{x} + n_{x}

Since we assumed that the only value that works for the x-variable system is all zeroes, let's put those in:

0 = 0 + n_{x+1}
0 = 0 + n_{x+1}
[etc.]
0 = 0 + n_{x+1}
n_{x+1} = 0 + 0

This shows clearly that if you add one variable to a system of x, the only value for the new variable that's consistent is zero. So, for any value of x, the only solution for an additive self-referential matrix is all zeroes.
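As a numerical illustration of the base case (a brute-force search over sample values, not a proof), the three-variable additive system can be checked in Python:

```python
# Grid search for solutions to the three-variable additive system
#   n1 = n2 + n3,  n2 = n1 + n3,  n3 = n1 + n2.
# The grid values are arbitrary samples; the steps of 0.5 are exact in floats.
grid = [v / 2 for v in range(-10, 11)]  # -5.0 to 5.0 in steps of 0.5
solutions = [(n1, n2, n3)
             for n1 in grid for n2 in grid for n3 in grid
             if n1 == n2 + n3 and n2 == n1 + n3 and n3 == n1 + n2]
print(solutions)  # only the all-zero triple survives: [(0.0, 0.0, 0.0)]
```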

**2. Subtractive self-referential matrices**

This is easy: since subtracting a number is the same as adding its negative, a negative real number is a real, and the proof covered all reals, it applies to subtraction too. So, the proof in **1** covers the subtractive case as well. It also covers any combination of adding and subtracting the individual n's: simply change "- n_{j}" to "+ (-n_{j})" and you'll see the same system as in **1**.

**3. Multiplicative self-referential matrices**

This type is a set of equations like this (remember, x is a positive integer greater than 2):

n_{1} = n_{2} * … * n_{x}
n_{2} = n_{1} * … * n_{x}
[etc. - you can see the pattern]
n_{x} = n_{1} * … * n_{x-1}

- The simplest, corresponding to the first case, is

n_{1} = n_{2}n_{3}
n_{2} = n_{1}n_{3}
n_{3} = n_{1}n_{2}

Now, if you take a look at it, you'll see that the only non-infinite values that fit are all zeroes or all ones. (One is the multiplicative identity, and zero is absorbing under multiplication - zero times anything is zero - so each leaves the answer unchanged no matter how many times it is applied, and thus neutralizes the infinite-regression snarlup.)

This is the first step for the stepladder proof.

- The second is to assume it's true for a set of x variables and see if it holds up for (x+1) variables:
First of all, assume that the x-variable system

n_{1} = n_{2} * … * n_{x} = F_{1}
n_{2} = n_{1} * … * n_{x} = F_{2}
[etc.]
n_{x} = n_{1} * … * n_{x-1} = F_{x}

has only the all-zero and all-one solutions. Next, add variable n_{x+1}:

n_{1} = n_{2} * … * n_{x+1}
n_{2} = n_{1} * … * n_{x+1}
[etc.]
n_{x} = n_{1} * … * n_{x-1} * n_{x+1}
n_{x+1} = n_{1} * … * n_{x-1} * n_{x}

Rewriting the first x equations in terms of the F's:

n_{1} = F_{1} * n_{x+1}
n_{2} = F_{2} * n_{x+1}
[etc.]
n_{x} = F_{x} * n_{x+1}
n_{x+1} = n_{1} * … * n_{x-1} * n_{x}, which = F_{x} * n_{x}

Now, let's put in the assumption and try zeroes and ones for n_{1} to n_{x}:

- First of all, ones:

1 = 1 * n_{x+1}
1 = 1 * n_{x+1}
[etc.]
1 = 1 * n_{x+1}
n_{x+1} = 1 * 1, so n_{x+1} = 1 too.

- Secondly, zeroes:

0 = 0 * n_{x+1}
0 = 0 * n_{x+1}
[etc.]
0 = 0 * n_{x+1}
n_{x+1} = 0 * 0, so n_{x+1} = 0 too.

This shows clearly that if you add one variable to a system of x, the only value for the new variable that's consistent is zero if the others are zeroes and one if the others are ones. So, for any value of x, the only solution for a multiplicative self-referential matrix is all zeroes or all ones.
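The claimed solutions themselves are easy to check mechanically for several system sizes (this verifies only that the all-zero and all-one assignments work, not that they are the only ones):

```python
# Check that all-zero and all-one assignments satisfy the multiplicative
# self-referential matrix n_i = (product of all the other n_j), for several x.
from math import prod

def satisfies(values):
    # Each entry must equal the product of all the other entries.
    return all(v == prod(values[:i] + values[i + 1:])
               for i, v in enumerate(values))

for x in (3, 4, 5, 6):
    assert satisfies([0] * x)  # all zeroes work
    assert satisfies([1] * x)  # all ones work
print("all-zero and all-one solutions verified for x = 3..6")
```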


**4. A multiplicative matrix with exponents**

This is quite easy, since a^{b} is (a * a *…* a), b times. So, if a has to be zero or one, then a^{b} has to be zero or one, because only 1^{b} = 1 and 0^{b} = 0 - with the exception of 0^{0}, but this is well-known already. Note that b in this case is real, so this applies to square roots too. (Note that this depends upon the proof of **3** above.)
For example, look at this (the m's and b's are all real numbers):

m_{1}^{b1} = m_{2}^{b2} m_{3}^{b3}
m_{2}^{b2} = m_{1}^{b1} m_{3}^{b3}
m_{3}^{b3} = m_{1}^{b1} m_{2}^{b2}

This system is clearly equal to the multiplicative one in **3** if you set

n_{1} = m_{1}^{b1}
n_{2} = m_{2}^{b2}
n_{3} = m_{3}^{b3}

This works because the b's, m's and n's are all reals. (A nonnegative real number raised to a real power is a real, as long as neither is infinite.)

More generally, just set

n_{i} = m_{i}^{bi}

where i is any positive integer, for all equations in an exponential multiplicative matrix. Then the reasoning of **3** applies.

Note, though, that if any exponent is less than zero, and the corresponding base variable is multiplied by another variable, this is equivalent to dividing by that variable raised to the power equal to the absolute value of the original: for example, a * b^{-2} = a / b^{2}. Since division by zero is impossible, the only possible solution for such a system is all ones.
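A quick numerical sketch of the two facts used above - the rewrite of a negative exponent as division, and the fact that an all-ones assignment satisfies an exponential system for any real exponents (the particular numbers are arbitrary samples, not values from the paper):

```python
# The identity a * b**-2 == a / b**2 for nonzero b:
a, b = 3.0, 2.0
assert a * b**-2 == a / b**2

# An all-ones assignment satisfies m1**b1 = m2**b2 * m3**b3 (and the
# other two equations) for arbitrary real exponents:
b1, b2, b3 = 0.5, -1.7, 4.0   # arbitrary sample exponents
m1 = m2 = m3 = 1.0
assert m1**b1 == m2**b2 * m3**b3
assert m2**b2 == m1**b1 * m3**b3
assert m3**b3 == m1**b1 * m2**b2
print("identities hold")
```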

**5. Divisive matrices, or any combination of multiplying and dividing such as [ab/c]**

This is easy too: simply note that dividing one number by another is the same as multiplying one number by the other raised to the power of negative one: [a/b] = a * b^{-1}. Once you see that, the reasoning in **4** covers it.

Really, **5** is only a matrix from **3**,

n_{1} = n_{2} * … * n_{x}
n_{2} = n_{1} * … * n_{x}
[etc. - you can see the pattern]
n_{x} = n_{1} * … * n_{x-1}

with at least one of the *'s changed to a division sign. Everything else has to be the same, but this applies to any combination of multiplying and dividing. Note, though, that because division by zero is impossible, a divisive matrix has to have all ones as the solution.
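The divisive three-variable case can be sanity-checked directly: all ones works, while the all-zero candidate immediately runs into division by zero (an illustration, not a proof of uniqueness):

```python
# Divisive three-variable system: n1 = n2 / n3, n2 = n1 / n3, n3 = n1 / n2.
n1 = n2 = n3 = 1.0
assert n1 == n2 / n3 and n2 == n1 / n3 and n3 == n1 / n2

try:
    z = 0.0
    _ = z / z  # the all-zero candidate would require dividing zero by zero
except ZeroDivisionError:
    print("all zeroes is ruled out: division by zero")
```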

**6. The general case, where all operators are permitted**

Once again, I'll be using the stepladder procedure, but the logic is a little more complex this time.

**6a. The three-equation case**

For any three-equation system with any of the four operators in place:

n_{1} = n_{2} [operator_{1}] n_{3}
n_{2} = n_{1} [operator_{2}] n_{3}
n_{3} = n_{1} [operator_{3}] n_{2}

it is important to remember that every variable is the sum, difference, product or quotient of the other two in the above matrix. It has been shown above, though intuitively, that for the "pure" systems containing the same operator throughout, the answer is the identity value appropriate for that operator: 0 for addition and subtraction, 1 for division, and 0 or 1 for multiplication. Simple substitution of one variable for another in this system, or any other self-referential matrix, leads to an infinite regress. This is because each variable in each equation is itself the result of the two other answers specified by the other two equations. Because of this endless cycle, and because each variable is a combination of the two others, this strongly suggests that the only values that could work are the identity values for each operator:

- 0 + 0 = 0;
- 0 - 0 = 0;
- 0 * 0 = 0 or 1 * 1 = 1;
- 1 / 1 = 1

An identity value - one that doesn't change the first variable when a second is joined to it by the operator, yielding an answer - is the only thinkable value that keeps this infinite regress still. In other words, if repeating the operation indefinitely leads to the same answer at all times, the regress becomes irrelevant. This implies that a logical "and" is applicable when there is more than one operator: the value that fits the system is one that all kinds of operators found in the system (at most, three different ones) will leave unchanged. For example, the system

n_{1} = n_{2} + n_{3}
n_{2} = n_{1} * n_{3}
n_{3} = n_{1} - n_{2}

will only be self-consistent for all zeroes, because zero is the only identity value that's common to all three operators. It's the only value that causes the infinite regress to stay in the same place for all iterations and thus become irrelevant, much like dividing zero by a non-zero number [that's increased by one for each cycle] using long division. [The quotient stays the same no matter how many iterations you go through.] This implies that a system with no common identity value, such as

n_{1} = n_{2} + n_{3}
n_{2} = n_{1} * n_{3}
n_{3} = n_{1} / n_{2}

has no finite solution.

I realize that this is the reasoning that appeared in less detail in the discussion of all three-variable self-referential matrices. It seems necessary to go into more detail here, because this case is less intuitively obvious than the others due to the mixing of operators, but the reasoning for this case and the "pure" ones is the same: the identity value is the only possible one because it renders irrelevant the infinite regress inherent in such a system.
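The two example systems can be spot-checked at the trivial values (this verifies only that all zeroes satisfies the first mixed system and fails the second; it says nothing about other solutions):

```python
# First mixed system: n1 = n2 + n3, n2 = n1 * n3, n3 = n1 - n2.
n1 = n2 = n3 = 0.0
assert n1 == n2 + n3 and n2 == n1 * n3 and n3 == n1 - n2

# Second system replaces the third equation with n3 = n1 / n2, which has
# no value at all zeroes:
try:
    _ = n1 / n2
except ZeroDivisionError:
    print("all zeroes fails the divisive equation")
```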

This covers the simplest, three-equation case. To build up from there, I'll be concentrating on one operator at a time:

**6b. Addition and subtraction**

For a simple additive variable - a k stuck at the end of each equation - the fact that the system without them has to work out to, at most, all zeroes or all ones shows that slapping a real variable of the same kind as the n's onto one or more of the system equations will foul things up completely unless the k for each equation is itself equal to zero, as shown below. This logic applies no matter what the original system is, nor does it matter how many k's there are. For a system of three equations (the simplest case):

n_{1} = n_{2} [operator] n_{3} + k_{1}
n_{2} = n_{1} [operator] n_{3} + k_{2}
n_{3} = n_{1} [operator] n_{2} + k_{3}

It has been shown in **6a** that the system minus the k's leads to the four trivial solutions. So, this reduces to:

0 = 0 + k_{1}
0 = 0 + k_{2}
0 = 0 + k_{3}

or

1 = 1 + k_{1}
1 = 1 + k_{2}
1 = 1 + k_{3}

For both of them, this leads to:

k_{1} = k_{2} = k_{3} = 0

In other words, when you add a new variable k to an already-existing system like the above, the k's end up equaling zero. The same holds true for a system of x equations, as shown below, when you add another layer of variables to a system that has already led to one of the four trivial solutions:

n_{1} = n_{2} [operator]…[operator] n_{x} + k_{1}
n_{2} = n_{1} [operator]…[operator] n_{x} + k_{2}
[etc.]
n_{x} = n_{1} [operator]…[operator] n_{x-1} + k_{x}

where k_{1} to k_{x} are reals and can also be zero - i.e., there isn't one for the corresponding equation. The only two possible values of the preexisting system, omitting the k's, are all zeroes or all ones. Substituting that in gives you either

0 = 0 + k_{1}
0 = 0 + k_{2}
[etc.]
0 = 0 + k_{x}

or

1 = 1 + k_{1}
1 = 1 + k_{2}
[etc.]
1 = 1 + k_{x}

For both of them, this leads to:

k_{1} = k_{2} = … = k_{x} = 0

It's important to note that this does not only apply to all k's; it also applies to any k's. In other words, if you have one or more variables added to a pre-existing system - and this matrix can be of any size greater than two - the added variable or variables have to be zero: that's what the equations above show. Because a pre-existing system stands on its own, you can add as few as one or as many as x additive variables to the kernel: the equations work out to the same thing, all k's equaling zero. Thus, since n's and k's are the same type of variable, an n-variable added to:

- any pre-existing system equation;
- all of them;
- or any other possibility in between

leads to that variable or those variables being equal to zero - and if one equals zero, all of them have to, as shown in **2** to **4** above, unless zero is impossible for the others; in that case, there isn't any solution, even the trivial one of all zeroes.
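The forcing of the additive k's to zero can be spot-checked mechanically (the sample constants are arbitrary; this illustrates the reduction, it doesn't prove it):

```python
# Once the underlying system forces every n to 0 (or every n to 1), each
# equation reduces to base = base + k, which can only hold when k == 0.
for base in (0.0, 1.0):          # the two trivial solution values
    for k in (-1.0, 0.5, 2.0):   # arbitrary nonzero sample constants
        assert base != base + k  # any nonzero k breaks the equation
    assert base == base + 0.0    # k = 0 is the only consistent choice
print("nonzero additive k's are inconsistent with the trivial solutions")
```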

**6c. Multiplicative systems**

This time, you're multiplying one or more k's onto the last term in each equation. Here's the system for the simplest three-variable case:

n_{1} = n_{2} [operator] n_{3} * k_{1}
n_{2} = n_{1} [operator] n_{3} * k_{2}
n_{3} = n_{1} [operator] n_{2} * k_{3}

The operators can be any one of the four. And, once again, a particular k can be there or not; if you don't want it there, simply make it equal to one and it'll disappear. Because the three-variable matrix reduces to the trivial solutions, the pre-existing system, plus whatever k's are stuck at the end, reduces to

0 = 0 * k_{1}
0 = 0 * k_{2}
0 = 0 * k_{3}

or

1 = 1 * k_{1}
1 = 1 * k_{2}
1 = 1 * k_{3}

For the second case, the k's reduce to all ones. For the first, the k's can be anything. And this holds up for a system of x equations too - i.e., a preexisting matrix to which another layer of k's is added:

n_{1} = n_{2} [operator]…[operator] n_{x} * k_{1}
n_{2} = n_{1} [operator]…[operator] n_{x} * k_{2}
[etc.]
n_{x} = n_{1} [operator]…[operator] n_{x-1} * k_{x}

The operators can be any one of the four, and, once again, a particular k can be there or not; if you don't want it there, simply make it equal to one and it'll disappear. Once again, the pre-existing system, plus whatever k's are stuck at the end, reduces to

0 = 0 * k_{1}
0 = 0 * k_{2}
[etc.]
0 = 0 * k_{x}

or

1 = 1 * k_{1}
1 = 1 * k_{2}
[etc.]
1 = 1 * k_{x}

The k's have to be one if the pre-existing solution is all ones, although the k's can be anything if the equations work out to all zeroes. Note that you can have as few as one or as many as x k's added to the original matrix: in other words, the same reasoning with regards to the any-to-all range in **6b** applies here.
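Both reduced cases can be spot-checked over sample constants (an illustration of the two reductions above, not a proof):

```python
# With multiplicative constants, 1 = 1 * k forces k == 1, while 0 = 0 * k
# holds for every k - matching the two cases above.
for k in (-2.0, 0.0, 0.5, 1.0, 3.0):       # arbitrary sample constants
    assert 0.0 == 0.0 * k                   # all-zero case: any k works
    assert (1.0 == 1.0 * k) == (k == 1.0)   # all-one case: only k = 1 works
print("multiplicative k behaviour verified")
```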

**6d. Divisive systems**

Here, we're adding k's with the division operator, starting with the simplest three-variable case, which has been shown in **6a** to lead to the trivial solutions:

n_{1} = n_{2} [operator] n_{3} / k_{1}
n_{2} = n_{1} [operator] n_{3} / k_{2}
n_{3} = n_{1} [operator] n_{2} / k_{3}

The operators can be any one of the four. And, of course, a particular k can be there or not; if you don't want it there, simply make it equal to one and it'll disappear. Because the three-variable matrix reduces to the trivial solutions, the pre-existing system, plus whatever k's are stuck at the end, reduces to

0 = 0 / k_{1}
0 = 0 / k_{2}
0 = 0 / k_{3}

or

1 = 1 / k_{1}
1 = 1 / k_{2}
1 = 1 / k_{3}

In the case of all zeroes, the k's can be anything except zero - but for the all-ones case, the k's have to be one. This is also true for the x-variable case when another layer of k's is added:

n_{1} = n_{2} [operator]…[operator] n_{x} / k_{1}
n_{2} = n_{1} [operator]…[operator] n_{x} / k_{2}
[etc.]
n_{x} = n_{1} [operator]…[operator] n_{x-1} / k_{x}

Like the other systems, the operators in the original matrix can be any one of the four, and a particular k can be there or not; if you don't want it there, simply make it equal to one and it'll disappear. Once again, this reduces to

0 = 0 / k_{1}
0 = 0 / k_{2}
[etc.]
0 = 0 / k_{x}

or

1 = 1 / k_{1}
1 = 1 / k_{2}
[etc.]
1 = 1 / k_{x}

So, all k's that are stuck there - and this can be any number from one to x - have to be equal to one if the pre-existing matrix reduces to all ones. If it reduces to all zeroes, the k's can be anything except zero. And this implies, if you stick n's in their place - which is permissible because k's and n's are the same type of variable - that a one there means the entire system has to be all ones.
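The divisive reductions can likewise be spot-checked over sample constants (illustration only):

```python
# With divisive constants, 1 = 1 / k forces k == 1, 0 = 0 / k holds for any
# nonzero k, and k = 0 is ruled out outright.
for k in (-2.0, 0.5, 1.0, 3.0):             # arbitrary nonzero samples
    assert 0.0 == 0.0 / k                    # all-zero case: any nonzero k
    assert (1.0 == 1.0 / k) == (k == 1.0)    # all-one case: only k = 1
try:
    _ = 1.0 / 0.0
except ZeroDivisionError:
    print("k = 0 is ruled out entirely")
```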

**6e. Exponential systems**

This involves adding, subtracting, multiplying or dividing k^{b}'s to the system as above, with b being a real number. Simply substitute l_{i} = k_{i}^{bi} where appropriate, and the exponential term(s) added reduce to one of the above cases, with the important qualification that if b is negative, taking the absolute value turns multiplication into division or the reverse. This is necessary, as the k's can equal zero or one for a multiplicative system but have to be one for a divisive system.

**6f. Tying it together**

The reason I've been repetitive about the any-up-to-all condition for the k's is that it is crucial to tying it together. It means that the operator joining a particular k to the corresponding equation can be any one of the four:

n_{1} = n_{2} [operator]…[operator] n_{x} [operator_{1}] k_{1}
n_{2} = n_{1} [operator]…[operator] n_{x} [operator_{2}] k_{2}
[etc.]
n_{x} = n_{1} [operator]…[operator] n_{x-1} [operator_{x}] k_{x}

Now, in order to move from x to (x+1), we need only specify that there be one k for each equation, that all the k's have to be equal, and that the variable they're equal to is n_{x+1}:

n_{1} = n_{2} [operator]…[operator] n_{x} [operator_{1}] n_{x+1}
n_{2} = n_{1} [operator]…[operator] n_{x} [operator_{2}] n_{x+1}
[etc.]
n_{x} = n_{1} [operator]…[operator] n_{x-1} [operator_{x}] n_{x+1}

The next thing is to put in the appropriate equation for n_{x+1}:

n_{1} = n_{2} [operator]…[operator] n_{x} [operator_{1}] n_{x+1}
n_{2} = n_{1} [operator]…[operator] n_{x} [operator_{2}] n_{x+1}
[etc.]
n_{x} = n_{1} [operator]…[operator] n_{x-1} [operator_{x}] n_{x+1}
n_{x+1} = n_{1} [operator]…[operator] n_{x}

For this last equation, we just have to note that all of the terms that make it up are either one or zero, depending upon the composition of the original matrix, because that is exactly what we are assuming: that the system reduces to the trivial solutions for a system of x variables. Since equation (x+1) is composed of all of these, and nothing more, it has to result in zero or one too. The cases in **6c** and **6d** where the k's can be anything don't apply, because the k's have to equal a combination of the other terms, all of which have been shown to be either zero or one, and they have to be equal to each other. In other words:

n_{x+1} = [0 or 1] [operator] [0 or 1]

The only possible solutions, if you disallow division by zero, are zero, one and two. But note that the only allowable systems for two are multiplicative and divisive ones where the other variables are all zeroes. In other words:

n_{x+1} = 0 [operator] 0

And the only solution for n_{x+1}, given these two variables, is either zero or, for division, undefined. In other words, if the "k's-can-be-anything" condition applies, substituting n_{x+1} for the k's and building the matrix up that way leads to the same old zero or one: the only conditions where the k's don't have to equal zero or one imply that the corresponding n_{x+1} has to be zero once the definition is followed. Thus, the stepladder proof is complete.
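The stepladder step can be illustrated concretely for the additive case: append a new variable n4 to each equation of the three-variable system and check which candidate values keep the assumed all-zero solution consistent (sample values only; an illustration of the step, not a proof of it):

```python
# Append n4 to each equation of n1 = n2 + n3, etc., plus the new equation
# n4 = n1 + n2 + n3, and test candidate values for n4 against the assumed
# all-zero solution of the original system.
n1 = n2 = n3 = 0.0
for n4 in (-1.0, 0.0, 2.0):   # arbitrary candidate values
    consistent = (n1 == n2 + n3 + n4 and
                  n2 == n1 + n3 + n4 and
                  n3 == n1 + n2 + n4 and
                  n4 == n1 + n2 + n3)
    assert consistent == (n4 == 0.0)
print("only n4 = 0 extends the all-zero solution")
```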

This is essentially complete except for the conclusion and an important digression.