




NO-THING, SOME-THING & THE UROBOROS:

G. SPENCER-BROWN'S LAWS OF FORM
The knowledge of the ancients was perfect. How so? At first, they did not yet know there were things. That is the most perfect knowledge; nothing can be added. Next, they knew that there were things, but they did not yet make distinctions between them. Next they made distinctions, but they did not yet pass judgements on them. But when the judgements were passed, the Whole was destroyed. With the destruction of the Whole, individual bias arose.

- Chuang Tzu.(1)

Anyone who thinks deeply enough about anything eventually comes to wonder about nothingness, and how something (literally some-thing) ever emerges from nothing (no-thing). Astrophysicists once solved this with a "steady-state" universe in which matter came into existence at just the right rate to keep everything uniform in an expanding universe. That theory has since been discredited, and it is now accepted that everything emerged from a "big bang"; that is why things are still expanding. The big question then becomes whether there is enough matter that the universe will eventually stop expanding and contract back to the original big-bang point. What happens then depends on whom you believe. More recently still, the expansion has appeared to be faster than would be consistent with the big bang unless there is also an anti-gravity force in the universe. Einstein once proposed just that, calling it the "cosmological constant"; he later considered it the biggest mistake of his life. It may have been no mistake at all. Of course, none of this really addresses how something can emerge out of nothing.

Nearly all races and cultures have creation myths which try to explain how the world began, how something emerged from nothing. Buddhism is especially concerned with the question of something and nothing. It directly addresses the concept of "no-thing", that which is left when every quality has been stripped away.

A mathematician, G. Spencer-Brown (the G is for George), made a remarkable attempt to deal with this question with the publication of Laws of Form in 1969. In the 1950s he left the safe confines of his duties as a mathematician and logician at Cambridge and Oxford to work for an engineering firm that specialized in electronic circuits, including those necessary to support the British railway system. Networks are composed of a series of branching possibilities: left or right, this way or that way. At each junction, a choice must be made among several possibilities. By mathematical manipulation, a choice among multiple branches can be reduced to a series of choices between only two possibilities. Network design thus involves problems virtually identical to those of logic, where one constructs complex combinations of propositions, each of which can be either true or false. Because of this, the firm hoped to find in Spencer-Brown a logician who could help them design better networks. Spencer-Brown in turn tried to apply a branch of mathematics known as Boolean algebra to their problem, initially to little avail, as we will see. But first we need to know a little about Boolean algebra.

Boolean Algebra

Pure mathematics was discovered by Boole in a work which he called The Laws of Thought.

- Bertrand Russell.(2)

By the mid-19th century, mathematics was undergoing a sea-change. Where previously mathematics had been considered the "science of magnitude or number", mathematicians were coming to realize that their true domain was symbol manipulation, regardless of whether those symbols might represent numbers. In 1847, in a little pamphlet called The Mathematical Analysis of Logic, George Boole [1815-1864] clearly defined this new view in his presentation of his calculus(4) of logic:

We might justly assign it as the definitive character of a true Calculus, that it is a method resting upon the employment of Symbols, whose laws of combination are known and general, and whose results admit of a consistent interpretation…It is upon the foundation of this general principle that I propose to establish the Calculus of Logic, and that I claim for it a place among the acknowledged forms of Mathematical Analysis.(5)

This was the first time anyone had actually presented a fully developed symbolic representational system for logical relationships. These symbols could then be manipulated using an algebra(6) to determine whether complex logical relationships were true or false. Boole developed these ideas still further in 1854, with his Laws of Thought.(7) There he presents an even more ambitious purpose, no less than capturing the actual mechanics of the human mind.

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method; to make that method itself the basis of a general method for the application of the mathematical doctrine of Probabilities; and, finally, to collect from the various elements of truth brought to view in the course of these inquiries some probable intimations concerning the nature and constitution of the human mind.(8)

With some degree of hyperbole, Bertrand Russell once remarked that "pure mathematics was discovered by Boole in a work which he called The Laws of Thought." In large part, this overstated claim arose out of Russell's own ambition to reduce first all mathematics, then eventually all human thought, to logic. The culmination of that effort was Russell and Whitehead's massive three-volume work Principia Mathematica(9), named (perhaps through hubris) after Newton's most famous volume. Boole was more realistic; even in the throes of his creation, he understood that there was more to the mind than logic. In a pamphlet Boole's wife wrote about her husband's method, she reported that when he was seventeen he had a flash of insight: we acquire knowledge not only from sensory observation but also from "the unconscious."(10) In this discrimination, Boole was amazingly modern.

Algebra vs. Arithmetic

To find the arithmetic of the algebra of logic, as it is called, is to find the constant of which the algebra is an exposition of the variables--no more, no less. Not just to find the constant, because that would be, in terms of arithmetic of numbers, only to find the number. But to find how they combine, and how they relate--and that is the arithmetic.

- G. Spencer-Brown.(11)

Spencer-Brown quickly discovered that real-world problems were far more complex than those he had studied in an academic setting. He started out using traditional Boolean algebra, but found he needed tools it did not provide. In essence he needed an arithmetic, which was a problem, since Boolean algebra was commonly considered the only algebra without an underlying arithmetic. What, then, is the difference between arithmetic and algebra? Put most simply, arithmetic deals with constants (the familiar numbers 1, 2, 3, … in the arithmetic we all grew up learning), while algebra deals with variables. If you cast your mind back to the algebra you may have taken in junior high school, high school or college, variables are simply symbols that stand for unknown constants. That is, an X or a Y or a Z might represent any number at all in an equation.

2 + 5 = 7; (56 − 12) · 11 = 484; 5² = 25. These are all arithmetic statements, since they involve only constants. 2X + 5Y = 7Z is algebra, since a variety of numbers can be substituted for the variables X, Y, and Z and still satisfy the equation. For example, if X = 1, Y = 1, Z = 1, we simply have the arithmetic statement already mentioned: 2 + 5 = 7. But many other combinations of X, Y, and Z also satisfy the equation; e.g., X = 9, Y = 2, Z = 4 will do (i.e., 18 + 10 = 28).
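
To make the constants-versus-variables point concrete, here is a minimal sketch (in Python, with hypothetical names chosen only for illustration) that treats the algebraic statement 2X + 5Y = 7Z as a test to run over many constant substitutions; each substitution that passes is one arithmetic fact covered by the single algebraic statement.

    def satisfies(x, y, z):
        """Do the constants x, y, z satisfy the algebraic equation 2X + 5Y = 7Z?"""
        return 2 * x + 5 * y == 7 * z

    print(satisfies(1, 1, 1))  # True:  2 + 5   == 7
    print(satisfies(9, 2, 4))  # True:  18 + 10 == 28
    print(satisfies(3, 1, 2))  # False: 6 + 5   != 14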

Boole had formed his logical algebra by close analogy to the normal algebra of numbers: "+" meant "union" and "×" meant "intersection." Today the symbols "∪" and "∩" have replaced "+" and "×" for union and intersection respectively, but otherwise things are unchanged. Since we're only involved with two possibilities--true or false--it's also handy to have a symbol, "~", for "not" or negation, which switches a statement to its opposite. The equivalents of numbers were simply "true" and "false"; that is, any statement in Boolean algebra had to be solved so that it reduced to either true or false. Instead of numeric variables, Boolean algebra used logical variables: an X or a Y or a Z might stand for a logical statement that could be either true or false. For example, X might be the statement that all pets are either dogs or cats, Y the statement that Bob's pet is a cat, and Z the statement that Bob's pet is a dog. So an equation X + ~Y = Z would say that, given "all pets are either dogs or cats" and "Bob's pet is not a cat," it follows that "Bob's pet is a dog."
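
In programming terms, the two Boolean "numbers" and the three operations map directly onto a language's built-in truth values. A small sketch (Python, purely illustrative):

    # The only "numbers" in Boolean algebra are the two truth values True and False.
    # Union ("+", today written ∪) behaves like OR, intersection ("×", today ∩) like AND,
    # and "~" like NOT.
    print(True or False)   # union:        True
    print(True and False)  # intersection: False
    print(not True)        # negation:     False

    # Every Boolean expression, however complex, reduces to one of the two values.
    x, y = True, False
    print((x or not y) and not (x and y))  # True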

Boole's decision to make his algebra almost exactly parallel to numerical algebra (in the symbolic form in which it was normally presented) made it easier for later mathematicians to understand and accept it (though, as is unfortunately all too usual, that had to await his death). But the symbol system most natural for numeric algebra isn't necessarily the best for logical algebra. In practice, complicated logical statements lead to complicated Boolean equations that are difficult to disentangle in order to determine whether or not they are true. And the absence of an arithmetic underlying the algebra meant that one could never drop down into arithmetic to solve a complex algebraic problem.

Since computers and other networks deal with just such binary situations--yes or no, left or right, up or down--it was natural to look to Boolean algebra for answers to network problems. But because Boolean algebra had developed without an underlying arithmetic, it was exceptionally difficult to find ways to deal with those problems.

The situation might have been different if some enterprising mathematician like Spencer-Brown had developed an arithmetic that only involved two values, then generalized it to an algebra. That isn't as far-fetched as it sounds, since modulo arithmetic has a long history in mathematics, and modulo 2 would be a natural starting point. Modulo arithmetic concerns itself with what is left over (if anything) when you divide numbers by some smaller number. 3 modulo 2 = 1, since you have a remainder of 1 when you divide 3 by 2. 512 modulo 2 = 0, since there is no remainder when you divide 512 (or for that matter, any even number) by 2. But since Boolean algebra developed to deal with logic, mathematicians only afterwards realized it dealt with any binary situation, and it didn't occur to them to go back and find an underlying arithmetic.
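
Modulo-2 arithmetic is easy to exhibit. The sketch below (Python, purely illustrative) shows the remainder examples from the text and the complete addition and multiplication tables of a two-valued arithmetic.

    # Remainders after division by 2, as in the examples above.
    print(3 % 2)    # 1
    print(512 % 2)  # 0

    # The whole arithmetic of the two values 0 and 1 under mod-2 rules.
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a} + {b} = {(a + b) % 2}   {a} x {b} = {(a * b) % 2}")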

So Spencer-Brown was forced backwards into developing an arithmetic for Boolean algebra, simply to have better tools with which to work. As with so many of the hardest problems encountered in mathematics, what he really needed was an easily manipulable symbol system for formulating the problems. Mathematicians had grown so used to Boole's system, developed as a variation on the normal algebra of numbers, that it never occurred to them that a more elegant symbolism might be possible. The one Spencer-Brown finally developed, after much experimentation, is seemingly the most basic symbol system possible, involving only the void and a distinction in the void.

No-thing and Some-thing

Nothing is the same as fullness. In the endless state fullness is the same as emptiness. The Nothing is both empty and full. One may just as well state some other thing about the Nothing, namely that it is white or that it is black or that it exists or that it exists not. That which is endless and eternal has no qualities, because it has all qualities. The Nothing, or fullness, is called by us the Pleroma.

- C. G. Jung.(12)

Imagine nothingness. Perhaps you envision a great white expanse. But then you have to take away the quality of white. Or perhaps you think of the vacuum of space. But first you have to take away space itself. Whatever the void is, it has no definition, no differentiation, no distinction. When all is the same, when all is one, there is no-thing, nothing. And, as Jung says in the quote above, "nothing is the same as fullness."

Then make a mark, a distinction. As soon as that happens, there is a polarity. Where before there was only a void, a no-thing, now there is the distinction and that which is not the distinction. Now we can speak of "nothing" as some-thing, since it is defined by being other than the distinction.

Don't throw up your hands in disgust at this. Let's bring it down to earth with an example. For our void, our nothingness, imagine a flat sheet of paper. Let's pretend that it has no edges, that it keeps extending forever; in mathematics this is called the plane. Of course, this infinitely extended piece of paper isn't really nothing, but it is undifferentiated--every part of it is the same as every other part. So it can at least serve as a representation of nothing. Now draw a circle in it. Following Spencer-Brown's terminology, we'll call this the "first distinction."

Where before there was no-thing, drawing the circle, a mark, a distinction, has created two things: an inside and an outside. Of course, it's entirely arbitrary what we term these two things. But let the circle and what it encloses be considered the distinction, the mark, and what is outside "not the mark". Now, of course, any distinction whatsoever would do. Any difference one could make which would divide a unitary world into two things would be a proper distinction. Freudians like to point to an infant's discovery that the breast is separate from itself as the first distinction that leads to consciousness. But there are infinitely many distinctions possible within the world.

Now let us flesh out this space we have created and discover its laws. Start by drawing a second circle beside the one we've already drawn. Imagine you are blind and wandering around this plane. You bump up against one of the circles and pass inside. After wandering around inside a while, you come up against the edge of the circle again and pass outside. Wandering some more, you encounter the edge of the second circle and again pass inside, then later outside. Is there any way you could possibly know that there were two circles, not one? How could you know whether you had gone into one of the circles twice or into both circles once? All you could know is that you had encountered an inside and an outside. Hence for all practical purposes, two distinctions (or three, or a million) are the same as one. Nothing (remember, literally no-thing) has been added.

Let's call these "paired distinctions" for simplicity. Spencer-Brown calls this the law of condensation; i.e., paired distinctions condense into a single distinction. Are there any other laws we have to find about this strange two-state space? Bear with me, there is only one other situation to consider. Let's go back to our original circle, the "first distinction." Let us draw a second circle, but this time draw it around the first, creating nested circles.

Once more imagine you are blind, wandering around the plane. You encounter the edge of a circle and pass within, thus distinguishing inside and outside. Once inside, you wander some more, then again you encounter the edge of a circle and pass outside. Or did you? Perhaps the edge you encountered was the edge of the inner circle and you passed within it. You are not able to distinguish between the inside of the inner circle and the outside of both circles. Two insides make an outside.

Let's assume that the outer circle stretches farther and farther away from the inner circle until you are no longer aware that it even exists. As far as you are concerned there is only the single circle through which you pass inside or outside. But a godlike observer who could see the whole plane would realize that when you passed inside the inner circle, you were actually re-entering the space outside the outer circle. It all depends on how privileged your perspective is. Nested distinctions erase distinction. Spencer-Brown refers to this principle as the law of cancellation.

Laws of Form

Although all forms, and thus all universes, are possible, and any particular form is mutable, it becomes evident that the laws relating such forms are the same in any universe.

- G. Spencer-Brown.(13)

These two laws are the only ones possible within the space created by a distinction. No matter how many distinctions we choose to make, they simply become combinations of paired or nested distinctions.

These almost transparently obvious laws are all that Spencer-Brown needed to develop first his full arithmetic, then his algebra. In proper mathematical form they are presented as axioms from which all else will be derived, but there is something unique going on here. In any formal mathematical system, axioms are not themselves open to examination; they are primitive assumptions beyond questions of truth or falsity. The remainder of the system is then developed formally from those primitives. But Spencer-Brown's axioms seem to be reasonable conclusions about the deepest nature of reality. They formally express the little we can say about something and nothing.

This is one of several reasons why Laws of Form has been either reviled or worshiped. Mathematicians and logicians are deeply suspicious of any attempt to assert that axioms might actually be assertions about reality. For over two thousand years, the greatest minds believed that Euclid's geometry was not only a logically complete system, but one that could be checked against physical reality itself. Only with the development of non-Euclidean geometries in the nineteenth century did it become apparent that Euclid's axioms might be merely arbitrary assumptions, and that a different set of assumptions could lead to an equally complete and consistent geometry.

Once bitten, twice shy--mathematicians became much more concerned with abstraction and formality. They separated what they knew in their mathematical world from what scientists asserted about the physical world. Of course, mathematics continued to surprise and delight by anticipating necessary developments in science, but that issue was left for philosophers to worry about. And since philosophy was increasingly becoming a branch of mathematical logic, this created a vicious circle. So Spencer-Brown's somewhat mystical starting point definitely went against the grain of modern mathematics.

Let's consider the elegant symbol system Spencer-Brown used to express and manipulate distinctions. Instead of our example of a circle in a plane, he uses a single right-angled mark, the "cross," to represent a distinction: whatever is written under the mark is inside the distinction, and the blank space around it is outside. Since that mark is awkward to reproduce here, we will let a pair of parentheses ( ) stand in for it, with whatever is written between them counting as "under the cross."

Our two laws then become:

( ) ( ) = ( )

and

( ( ) ) =

where the right-hand side of the second equation is deliberately left blank. The first equation restates condensation: two marks standing side by side condense into a single mark. The second restates cancellation: a mark within a mark cancels back into nothing, the unmarked state.

Using only those two laws, the most complex combinations of marks can be reduced either to a single cross (to use Spencer-Brown's term for this symbol) or to nothing.
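
To show how mechanical this reduction is, here is a minimal sketch (Python, with a representation chosen purely for convenience, not Spencer-Brown's own notation): an expression is a tuple of the crosses standing in a space, and each cross is in turn the tuple of whatever stands beneath it. The function reduces any such expression to one of the two primitive values.

    def marked(space):
        """Reduce an expression to its value: True for a cross, False for nothing.

        A space is marked if at least one cross standing in it is marked
        (condensation), and a cross is marked exactly when its own contents
        reduce to the unmarked state (cancellation)."""
        return any(not marked(contents) for contents in space)

    VOID = ()                   # nothing: the unmarked state
    CROSS = ((),)               # a single empty cross

    print(marked(VOID))         # False -- nothing
    print(marked(CROSS))        # True  -- a cross
    print(marked(((), ())))     # True  -- ( ) ( ) condenses to ( )
    print(marked((((),),)))     # False -- ( ( ) ) cancels to nothing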

Though any combination of marks, no matter how complex, can be reduced using this simple arithmetic, Spencer-Brown found it useful to extend the arithmetic to an algebra by allowing variables; i.e., alphabetic characters that stand for combinations of marks. For example, the letters p or q or r might each stand for some complex combination of marks. He then developed theorems involving combinations of marks and variables which would be true no matter what the variable might be. Since his whole point was to develop the arithmetic which underlay Boolean algebra, of course the algebra he developed was equivalent to Boolean algebra. But, as he points out, the great advantage is that since his arithmetic was totally indifferent to what two-valued system it was applied to, the resulting algebra is equally indifferent to its application. It can certainly be applied to logic, as Boolean algebra was, but it can equally well be interpreted as an algebra of any two-valued system.
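
Since a variable can only ever take the two primitive values, an algebraic theorem can be verified by brute force. A small illustrative check (Python, reading each cross as flipping the value of whatever stands beneath it, as in the sketch above) of the identity ((p)) = p:

    # Check the algebraic identity ((p)) = p for every value the variable can take.
    # The cross is read here as flipping (negating) the value beneath it.
    for p in (False, True):        # nothing / cross
        assert (not (not p)) == p  # ((p)) reduces to p in both cases
    print("((p)) = p holds for both values")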

As with so many highly original ideas, Spencer-Brown's system has been largely ignored. Those who, like the author, consider it brilliant and profound(14) are balanced by those who think that Spencer-Brown has merely "reinvented Boolean algebra but in an obscure notation."(15) I can readily understand this position, as for years I refused to do more than casually glance at Laws of Form, put off by the strangeness of the presentation. Only when I actually worked through the mathematics carefully from beginning to end did I grasp just how profound and powerful this system was. Though excited almost instantly by the elegance of the demonstrations, I still found it extremely challenging to grasp. There are, for example, 30 pages of notes to describe 76 pages of main text. I had to move back and forth between notes and text (and then eventually to the appendices, where he interprets the total system of algebra and arithmetic for logic). Many is the time I wished there were 76 pages of notes for 30 pages of text, rather than the other way around. But the difficulty lies less in Spencer-Brown's presentation (which is admittedly idiosyncratic) than in the originality of the conception; i.e., paradoxically, there is no simple way to present a system quite this simple.

Self-Reference, Imaginary Numbers, and Time

Space is what would be if there could be a distinction. Time is what would be if there could be oscillation.

- G. Spencer-Brown.(16)

Hopefully, the first of these rather oracular statements now makes sense. Francisco Varela has called the latter "in my opinion, one of his most outstanding contributions."(17) Let's see if we can bring equal sense to it.

In solving many of the complex network problems, Spencer-Brown (and his brother, who worked with him) used a further mathematical trick which he avoided mentioning to his superiors, since he couldn't then justify its use. He had been working with his new techniques for over six years and was in the process of writing the book that became Laws of Form when it finally hit him that he had made use of the equivalent of imaginary numbers within his system.

Imaginary numbers evolved in mathematics because mathematicians kept running into equations whose only solution involved something seemingly impossible: the square root of -1 (symbolized by √-1). If you recall from your school days, squaring a number simply means multiplying it by itself; taking the square root means the opposite. For example, the square of 5 is 25; inversely, the square root of 25 is 5. But we've ignored whether a number is positive or negative. Multiplying a positive number by a positive number yields a positive number; but multiplying a negative number by a negative number also yields a positive number. So the square root of 25 might be either +5 or -5. But what then could the square root of a negative number mean?

This was so puzzling to mathematicians that they simply pretended such a thing could not happen. This wasn't the first time they had done so: initially negative numbers were viewed with the same uneasiness, and the same thing happened with irrational numbers such as the square root of 2 (an irrational number cannot be expressed as the ratio of two integers). Finally, in the 16th century, an Italian mathematician named Cardan had the temerity to use the square root of a negative number as the solution of an equation. He quickly excused himself by saying that, of course, such numbers could only be "imaginary." The name stuck as more and more mathematicians found the technique useful, and the symbol for √-1 became i.
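
Modern languages simply build these once-"impossible" numbers in. A quick sketch (Python, illustrative only):

    import cmath

    i = cmath.sqrt(-1)   # the "impossible" square root of a negative number
    print(i)             # 1j -- Python writes the imaginary unit as j
    print(i * i)         # (-1+0j): squaring i brings us back to -1

    # Successive powers of i cycle through i, -1, -i, 1, then repeat -- an oscillation.
    p = 1 + 0j
    for n in range(1, 5):
        p *= i
        print(n, p)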

Spencer-Brown had come up with an equivalent situation in solving network problems. Instead of the square root of a negative number, he found equations in which a variable was forced to refer to the cross of itself, like E2 and E3 below (written in our stand-in notation):

E2:   f2 = ( ( f2 ) )

E3:   f3 = ( f3 )

Remember that f has to stand for some combination of marks that ultimately reduces to either a cross or nothing. There is no problem with the first equation: it holds whether f2 is a cross or nothing. But in the second equation, if we assume that f3 is nothing, then the right-hand side is a cross, so f3 must be a cross; and if we assume that f3 is a cross, the right-hand side cancels to nothing, so f3 must be nothing. Just as with imaginary numbers, we are dealing with an impossibility, in this case caused by self-reference.

Spencer-Brown had simply used these impossible numbers without understanding what they meant. With the realization that they were equivalent to imaginary numbers, he not only understood what they represented, but gained an insight into how imaginary numbers themselves could be interpreted: both are an oscillation that creates time in a static space.

Early in the twentieth century there was a huge effort to formalize mathematics, culminating in the attempt by Bertrand Russell and Alfred North Whitehead to reduce mathematics to logic. The fly in the ointment was self-reference. Russell had discovered a paradox in logic involving the "set of all sets which do not contain themselves." Since that's a little intimidating, let's consider the popular version, which Russell called "The Barber Paradox." Imagine a town with only one barber. Every man in town either shaves himself or is shaved by the barber, and the two groups are mutually exclusive; i.e., no one both shaves himself and is shaved by the barber. So far so good; now here comes the paradox: who shaves the barber? If we say he shaves himself, then he cannot be shaved by the barber--but he is the barber, so he is shaved by the barber after all. That won't work. If instead we say the barber shaves him--well, he is the barber, so he shaves himself, and then he cannot be shaved by the barber. Neither way works logically. We're stuck in a paradox.

Russell found himself unable to resolve the paradox he had discovered. Finally, he resorted to a trick: he defined a "theory of types" in which, by definition, a class could not refer to itself. That meant he had to place "classes of classes" at a different logical level than "classes," and "classes of classes of classes" at a different level than "classes of classes," ad infinitum. Working with such a Rube Goldberg system led to more and more difficulties. The result, Principia Mathematica, was so large that Russell had to hire an old four-wheeled cart to carry the manuscript to the publisher.

Then in 1931, the mathematician Kurt Gödel turned Russell's theory of types on its head and used self-reference to prove that any such system as that laid out in Principia Mathematica was either incomplete or inconsistent; that is, there were true mathematical statements which could not be derived from the axioms of the system, no matter how many additional axioms were added. The essential problem was that mathematics (and science and art and any other product of the human mind) is bigger than logic.

Since then, under the name of "iteration" in computer programming, self-reference has slowly moved from something to be avoided at all costs to a normal and necessary state of affairs. For example, computer programs commonly count the number of times a sub-routine has run by adding an instruction like "n = n + 1", then checking the value of "n" to see if the sub-routine has run enough times. It is understood that the "n" on the left side of the equation is a later stage than the "n" on the right side. Time has entered the picture. But note that this time is dimensionless. We can't say that one "n" is a day or an hour or a minute or a second later than the other "n"; all we know is that one state of "n" is later than the other state. This is analogous to how we created a space without dimension by the simple act of making a distinction.

Spencer-Brown realized that his simple but puzzling little equation brought time, in its simplest manifestation, into the timeless world of his Laws of Form. Such equations simply "oscillate" between one value and another, just as imaginary numbers provide the possibility of oscillating between values that lie first on the real number line, then off it, then on it again, and so forth. Francisco Varela simplified the expression of this equation still further by adding a new symbol, which he called the "self-cross" by analogy with Spencer-Brown's "cross."
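
A sketch of that dimensionless time (Python again, purely illustrative): iterating the self-referential equation f = ( f )--in Boolean terms, f = not f--never settles on a value, but it does generate a sequence of states, and the step index is the only "clock" there is.

    # The re-entrant equation f = ( f ): each new state is the cross (negation) of the last.
    f = False                 # start in the unmarked state ("nothing")
    for step in range(6):
        print(step, f)        # 0 False, 1 True, 2 False, ... -- an oscillation
        f = not f             # as with "n = n + 1", the left f is a later state than the right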

When we look at the self-cross, with a little imagination we can visualize the ancient symbol of the uroboros: the snake curling back to swallow its own tail. As with so many symbols, the uroboros conceals a whole philosophical history within its simple form. How does something emerge from nothing? How does the one evolve into the many? How does space emerge? How does time emerge? G. Spencer-Brown's Laws of Form starts from the most basic mathematical position possible, produces something from nothing, shows how it evolves into many, how space emerges, then time. And in the process it ends where the ancients ended: with the uroboros.

1. Ch'u Chai, The Story of Chinese Philosophy, pp. 106-7.

2. See note 3.

3. Quotation in Carl B. Boyer, A History of Mathematics, p. 634, among other sources.

4. The word "calculus" is used simply in the sense of a system of calculation. Boole doesn't mean that his system in any way relates to the mathematical field of calculus jointly developed by Newton in England and Leibniz in Germany in the late seventeenth century.

5. Carl B. Boyer, A History of Mathematics (Princeton: Princeton University Press, 1995), p. 633.

6. In most ways identical in operation to ordinary mathematical algebra. We'll discuss algebra more when we return to G. Spencer-Brown.

7. George Boole, An Investigation of the Laws of Thought: On Which are Founded the Mathematical Theories of Logic and Probabilities (New York: Dover, 1854/1958).

8. George Boole, An Investigation of the Laws of Thought, p. 1.

9. Alfred North Whitehead & Bertrand Russell, Principia Mathematica (3 vols. Cambridge, U.K.: Cambridge University Press, vol. 1, 1910, vol. 2, 1912, vol. 3, 1913).

10. E. T. Bell, Men of Mathematics (New York: Simon and Schuster, 1965), pp. 446-447.

11. G. Spencer-Brown. Transcripts of tapes of conference at Esalen Institute, March 10-20, 1973, dealing with Laws of Form. Attending were Gregory Bateson, Alan Watts, John Lilly, and Kurt von Meier, among others.

12. C. G. Jung, "VII Sermones ad Mortuos," Stephan A. Hoeller, trans., in Stephan A. Hoeller, The Gnostic Jung and the Seven Sermons to the Dead (Wheaton, Illinois: A Quest Book, Theosophical Publishing House, 1982), p. 44. See C. G. Jung, Memories, Dreams, Reflections, revised edition (New York: Pantheon Books, 1973), appendix V (Septem Sermones ad Mortuos) for a different translation by Richard and Clara Winston.

13. G. Spencer-Brown, Laws of Form, revised edition (New York: E. P. Dutton, 1979), p. xxix.

14. E.g., Francisco J. Varela, Principles of Biological Autonomy (New York: North Holland, 1979), pp. 106-169, Appendix B.2.

15. Paul Cull and William Frank, "Flaws of Form" (International Journal of General Systems, Volume 5, 1979), pp. 201-211. Also see B. Banaschewski, "On G. Spencer Brown's Laws of Form" (Notre Dame Journal of Formal Logic, Volume XVIII, Number 3, July 1977), pp. 507-509.

16. G. Spencer-Brown, Esalen, 1973.

17. Francisco J. Varela, Principles of Biological Autonomy (New York: North Holland, 1979), p. 138.
