What is inside of you is what is outside of you, ... what you see outside of you, you see inside of you.
- The Thunder: Perfect Mind.(1)
What is "out there" apparently depends, in a rigorous mathematical sense as well as a philosophical one, upon what we decide "in here."
- Gary Zukav.(2)
In the previous chapter, we talked of dreams as gateways. The question, of course, is "gateway to where?" The world of the dream seems a very different world from the daytime world in which most of our life takes place. Yet when we are there, it seems every bit as real. When we wake with the memory of the dream, is that memory an accurate picture of the dream world? Or are we already forcing the inside into the terms of the outside? Let's begin our attempt to answer these questions with the first outer manifestation of the inner: our brain.
The Triune Brain
Unless one wished to return to a complete psychophysical dualism, it was best to consider external states as the expression of internal states, which could, moreover, exist independently of this expression.
- Marc Jeannerod.(3)
The first thing we need to appreciate about the brain is that it is really made up of a series of individual brains, each of which evolved at different points in time. Neurophysiologist Paul MacLean's research shows that there are three distinct brains which combine within what we term the brain. The most ancient, the reptile brain, is located at the top of the brain stem which leads into the spinal cord. Wrapped around it is the mammal brain, more properly the limbic system. And wrapped around it is the most recent brain, the neocortex, that almost infinitely wrinkled surface which we normally think of as the human brain. Though all three brains necessarily communicate to some extent, in large part they take care of their own business without interference from each other. And that causes some interesting complexities of human behavior.
The reptile brain, for example, handles issues of aggression, territoriality, social hierarchies, and ritual. (Note that some rituals thus predate consciousness. Our task is to construct conscious rituals.) Thus when we get "territorial" over "our" toys, "our" boyfriend or girlfriend, "our" status, "our" scientific discovery, the emotional response wouldn't be out of place in a dog or a cat or a bird or a fish or even a dinosaur. All react from essentially the same reptile brain, which dates back a quarter of a billion years (to when the first proto-dreams began to appear, you may recall).
The mammal brain "governs social awareness and relationships--belonging, caring, empathy, compassion and group preservation."(4) So the emotional behaviors that we consider to represent humanity at its best are hardly unique to human beings. They appeared a hundred and fifty million years ago, long before human beings appeared on the scene. Given their longer histories, we would do well to look to other mammals for examples of how we could improve our own behavior in these areas. Wolves, for example, are superb exemplars of caring, compassionate parenting. When Richard Adams wrote Watership Down,(5) I think many of us were surprised how easy it was to relate to rabbits. Now it's true that they were rather anthropomorphized rabbits, but there was still a "rabbitness" to them that was both fascinatingly different, yet somehow also recognizably close to human feeling.
Finally, at the top of the evolutionary ladder, the neocortex appears: the human brain. Though it first emerged in the higher mammals "several tens of millions of years ago . . . its development accelerated greatly a few million years ago when humans emerged."(6) Many neuroscientists believe that a great deal of this brain evolved out of the need to increase visual acuity. The cognitive abilities we treasure so highly appeared as an offshoot of that new visual proficiency. (Though as we will see later in this chapter, perhaps smell precedes sight in its claim on the increased complexity of the brain.)
The fact that each of these three brains is largely independent of the others helps explain many of our most paradoxical behaviors, in which we seem to be at war within ourselves. In his Maps of the Mind, Charles Hampden-Turner gives this ironic summary of what the world would be like if MacLean were right (and he is):
Tradition in this culture might locate precise thoughts in the mind, but vague emotions in the heart, breast, bowels, blood, nerves or viscera (which are indeed controlled by the limbic or mammalian brain.) . . . Such a culture might be split between cerebral conceptions of science and the expressive arts, the two barely on speaking terms.(7)
Novelist and Renaissance man C. P. Snow popularized this last split between the scientific and artistic worlds with his famed term "the two cultures."(8) His entire work, especially his celebrated series of eleven novels, Strangers and Brothers, can be seen in large part as an accurate representation of the interaction between these two views of reality over the course of the twentieth century. Though Snow's own view of the possibility of reconciling these inherent conflicts was largely pessimistic, he nevertheless saw hope in our ability to get beyond individual needs and desires.
But this isn't all. One looks outside oneself to other lives, to which one is bound by love, affection, loyalty, obligation: each of those lives has the same irremediable components as one's own; but there are also components that one can help, or that can give one help. It is in this tiny extension of the personality, it is in this seizing on the possibilities of hope, that we become more fully human.(9) [Author's note: or is it more fully animal?]
Though these three brains may be in conflict, they have also necessarily learned to cooperate in order to survive and flourish. The coordination between the reptile and mammal brains is, of course, better than their coordination with the neocortex, simply because they have had so much more time to learn to co-exist. Interestingly, MacLean sees our best hope for unifying the three brains in the act of creativity. We have already seen in the previous chapter that creative "search activity" in the brain produces theta waves which spread across the brain as a whole. Accordingly, part of learning to become a "trained practitioner" of the psyche is the ability to tap that creative potential as needed. In that respect, cybernetics and biofeedback expert Elmer Green comments that:
Our goal is to gain access to what's going on in the lower brain centers, where the deeper levels of consciousness reside . . . First, though, you have to get the loud noise of waking consciousness turned off, because the information that comes up from the lower brain centers is as delicate and subtle as the draft of a butterfly's wing. The instant you turn too much left cortex attention to it, the information tends to slip away.(10)
We'll talk more of such matters later in the book, but first we need to know more about the brain.
Innate Animal Behaviors
There was never a king like Solomon
Not since the world began
Yet Solomon talked to a butterfly
As a man would talk to a man.
- Rudyard Kipling.(11)
As we have seen in the previous section, the brain preserves its evolutionary history in its very structure. It has stored an immense number of behaviors that have proved useful for our ancestors in the human and animal world over vast periods of time. Certain situations repeat over and over for every member of a species. For example, frogs have to be very good at recognizing flies or they will go hungry. Humans have to be very good at recognizing human faces or they can't function within any human social structure. So there has to be a great deal of specificity in what is stored in the brain.(12) On the other hand, every creature is in large part born into a world unique to it.
For example, though all creatures are born from a mother and in a majority of species, are raised by the mother, mothers come in a great variety of different packages. Nature has to handle both the specificity and the variety. In his King Solomon's Ring, Nobel Prize-winning ethologist Konrad Lorenz explains that "greylag goslings unquestioningly accept the first living being whom they meet as their mother."(13) Lorenz experimented with this "imprinting" behavior and often served as surrogate mother for a variety of little creatures. In the preface, there is a lovely drawing Lorenz did of himself, a friend, and a number of animals. He describes the scene:
First came a big red dog, looking like an Alaskan husky, but actually a cross between an Alsatian and a Chow, then two men in bathing trunks carrying a canoe, then ten half-grown greylag goslings, walking with all the dignity characteristic of their kind, then a long row of thirteen tiny cheeping mallard ducklings, scurrying in pursuit, forever afraid of being lost and anxiously striving to keep up with the larger animals. At the end of the procession marched a queer piebald ugly duckling, looking like nothing on earth, but in reality a hybrid of ruddy sheldrake and Egyptian goose. But for the bathing trunks and the moving picture camera slung across the shoulders of the men, you might have thought you were watching a scene out of the garden of Eden.(14)
The mallards were a triumph for Lorenz, as, for quite a while, he hadn't been able to figure out how to make them see him as mother. Through some detective work, he figured out that it wasn't enough to be the first person they saw at birth; in addition, he "must quack like a mother mallard in order to make the little ducks run after me." That turned out to do the trick. Though he looked about as much like a mother mallard "as Calvin Coolidge looks like Metro Goldwyn Mayer's lion" (as James Thurber so perfectly described the wolf masquerading as Red Riding Hood's grandmother in his Fables for Our Time), "anything that emits the right quack note will be considered as mother."(15)
Nor are such inherited behaviors limited to newborns. Courting rituals are also instinctive in many species. An adult male jackdaw fell in love with Lorenz and tried all the wiles that normally proved successful with female jackdaws. For example, he kept trying to feed Lorenz delicacies like ground-up worms. Like a true scientist, Lorenz suffered this disgusting diet as long as he could. When he was finally so sick of the taste of worms that he refused to open his mouth, the jackdaw filled his ear with worms, then was disappointed when this proved a less than successful romantic strategy.(16)
A baby's recognition of mother, an adult's repertoire of courting behavior, both innate. Recognition of danger is also often inborn. "Magpies, mallards or robins, prepare at once for flight at their very first sight of a cat, a fox or even a squirrel. They behave in just the same way, whether reared by man or by their own parents." In contrast, jackdaws, who we have seen are born knowing so much about love, have to be taught to recognize danger by their parents. Though they may not be born with a sense of self-preservation, they are, however, born with an innate need to protect their young at all costs. Since jackdaws--including baby jackdaws--are black, "any living being that carries a black thing, dangling or fluttering, becomes the object of furious onslaught." Poor Lorenz discovered this to his distress when he accidentally picked up a baby jackdaw and was instantly attacked by the mother. He dropped the baby, and was left with a wounded and bloody hand. Now forewarned, Lorenz systematically explored just how far this instinctual behavior extended. He found, for example, that he could safely carry his black camera, but "the jackdaws would start their rattling cry as soon as I pulled out the black paper strips of the pack film which fluttered to and fro in the breeze." This happened even though the adult birds knew Lorenz to be their friend and no threat to their children. Instinct simply took over.(17)
Though we have a considerably greater ability to take conscious control of our actions than a jackdaw, a huge number of our behaviors are already stored away at birth in those reptilian and mammalian brains, ready to be triggered into action when needed.
The Digital Computer Model of the Brain/Mind
Man seeks to form for himself, in whatever manner is suitable for him, a simplified and lucid image of our world, and so to overcome the world of experience by striving to replace it to some extent by this image.
- Albert Einstein.(18)
Throughout history, thinkers have turned to cutting-edge science and technology for new metaphors for thought. This is enormously productive for generating new views of reality, but such metaphors also need to be viewed with some skepticism. In the seventeenth century, for example, when Descartes looked for a model of the brain, he turned to hydraulics. Though, to us, this seems an eccentric choice, hydraulics was the most advanced technology of the time, and thus seemed a perfectly reasonable choice to Descartes.
In his model, the pineal gland, which lies behind the middle of the forehead, was "the seat of imagination and common sense." It received information from our senses, then formed an image. So far, if we substituted the brain for the pineal gland, it would sound much like the normal, though outmoded, view of perception still taught in school. But then comes the hydraulics: the pineal gland actually leans toward the "side bearing the image." Like some Rube Goldberg device, the leaning gland then opened or closed certain tubes.
Sensory stimulation produced a flux of the animal spirits contained in the heart and arteries. The heart then pushed the spirits into the cerebral cavities, much as the pumps of an organ push air into its pipes. . . . After death the brain collapsed and fluid could no longer circulate.(19)
The most prevalent current cognitive model of the brain/mind views the brain as a complex digital computer, and the mind as nothing more than the programs it runs, which are stored within the structure of the brain. Because of the ubiquity of computers within the fabric of modern society, this model seems totally reasonable to many, but times change and the computer model may someday seem as outdated as Descartes' hydraulic model.
With this disclaimer in mind, let's look at the computer as a metaphor for the brain/mind. The same computer can run many different programs, just as a human can perform many different behaviors. Both the computer and the brain store their programs in some sort of long-term memory until they are needed. In addition to the bread-and-butter programs, the computer needs a special program, normally called an operating system (O/S), which operates at a higher level than any of the other programs. An O/S is like a foreman in a factory who keeps things running smoothly. The O/S knows which programs are running in the computer, which are waiting to run, and which have already run. However, the O/S doesn't decide which programs need to be run, just as the foreman doesn't decide what products a factory should make. This is an executive decision; a human operator tells the computer which programs need to be run in what order of importance. The O/S then schedules the programs, locates them in its long-term memory, runs them when it has the time and resources to do so, prints the results and stores the programs away again for later use.
Note that there are at least three levels of operation at work here:
(1) the executive level which decides which programs should be run;
(2) a foreman level which keeps things running smoothly; and
(3) a worker level which does the actual work we associate with the computer.
By analogy, driving a car requires an executive decision on one level of the psyche, the organization and supervision of the necessary behaviors on a second level, and the actual performance of the behaviors on a third level. It is important to realize that only the executive level necessarily involves consciousness. The foreman and worker levels can proceed famously without conscious intervention. This last is an important concept that we can learn from examining this model, even if the model itself proves inadequate.
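The three levels can be sketched as a toy program. This is only an illustration of the analogy, not of any real operating system; the class and function names (Foreman, wash_dishes, drive_to_store) are invented for the example:

```python
import heapq

# Toy illustration of the three levels: the executive decides WHAT should
# run, the foreman (operating system) decides WHEN, and the workers do
# the actual work.

def wash_dishes():          # a "worker": performs one concrete task
    return "dishes washed"

def drive_to_store():
    return "arrived at store"

class Foreman:
    """Schedules programs by priority; never decides what to schedule."""
    def __init__(self):
        self.queue = []
        self.counter = 0    # tie-breaker so equal priorities stay first-come

    def schedule(self, priority, worker):
        heapq.heappush(self.queue, (priority, self.counter, worker))
        self.counter += 1

    def run_all(self):
        results = []
        while self.queue:
            _, _, worker = heapq.heappop(self.queue)
            results.append(worker())
        return results

# The "executive" level: someone outside decides which programs matter and how much.
foreman = Foreman()
foreman.schedule(2, wash_dishes)      # lower number = more urgent
foreman.schedule(1, drive_to_store)
print(foreman.run_all())              # the store run happens first
```

Note that the workers and foreman carry no awareness of why they run; only the executive level, where the priorities were set, corresponds to consciousness in the analogy.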
Just to mention a few diverse examples of the "programs" stored in our brains: we might have a program that tells us how to play tennis, and another that enables us to operate a computer, much as I am doing now as I write this book. Of course, those high-level programs would in turn be composed of many increasingly smaller programs that would range down to the level of directing the appropriate parts of our body to move. All of us have a series of highly complex programs to draw on in social settings. These programs may or may not have anything to do with consciousness. For example, another program might drive our car for us, even if our conscious attention is elsewhere. The variety of programs available to a normal person is almost uncountable.
Though the situation is not as clear as with a computer, we also appear to have a higher-level "foreman" function similar to the operating system of a computer, something which keeps track of the total system, loads and runs programs as necessary, allocates resources where needed, etc. Various theorists already part ways at this point, some arguing against the need for any such central program; it is, however, convenient for our illustration of the digital computer model. Again, consciousness doesn't appear to be involved in this centralized program, any more than it is in the running of a particular application program. This is not to deny that consciousness can enter the scene and take control of a previously unconscious program; it's just to assert that the human psyche is able to operate quite well even in the limited role of a computer, totally devoid of consciousness.
Computer programs are highly organized groups of abstract symbols, specific to the computer within which they are intended to operate. They are stored in some form of relatively permanent memory accessible by the computer, either by being permanently attached to the computer, or by being able to be read into the computer when necessary. The programs of the human psyche must be to some extent similar. The storage might be something obvious, like a record of synaptic changes in the brain, or less obvious, like a pattern of musculature in the body. Or perhaps some of the programs are not stored in the body at all. Rupert Sheldrake has speculated that the brain operates more like a radio which receives information transmitted from outside itself via radio waves. But the storage need not concern us at this point.
The operating system of the human psyche, like the operating system of a computer, necessarily operates at a higher level than the application programs of the psyche. Earlier, I used the analogy of "workers" for the application programs and "foreman" for the operating system, with the "executive" on top making the decisions that the "foreman" organizes and the "workers" carry out. In the psyche, consciousness seemingly forms the highest level of that analogy: the decision maker. If our analogy between computer and human mind can be extended this far, the operating system must also be a program, albeit a highly complex program. In a computer, the most significant parts of the operating system are usually stored in part in a permanent, extremely compact form on one or more computer chips. This makes this part of the operating system incredibly fast at performing necessary repetitive tasks. The remainder of the operating system is stored as software, much like any other program, except that its function is to control the entire computer.
The human equivalent to the part of the operating system stored on permanent computer chips might be the entire structure of the human body, with a special emphasis on the DNA structure in the genes, and the nervous system culminating in the human brain. But the entire body structure contributes to the control of itself in ways much more complex than any existing computer. However, the analogy is still close enough for our purposes so we will leave it there for now.
…the brain is a selective system more akin in its working to evolution than to computation or information processing.
- Gerald M. Edelman.(20)
In our discussion of MacLean's triune model of the brain, we saw that our brain is actually a repository of hundreds of millions of years of evolutionary experience. Konrad Lorenz provided us with examples from the animal world of complex, innate behaviors for mother/child interaction, mating, and recognition of danger, among many others. Similar behaviors are, of course, also innate in humans, though we have more conscious control over exercising them.
Charles Darwin's theory of evolution by natural selection provides the most commonly accepted explanation for how such behaviors can be innate. Take the adult jackdaw's instinctive attack of anyone grabbing a black, dangling object. Darwin's theory would argue that the jackdaws who were quickest to attack someone who picked up any object even remotely like their children would have more of their children survive. Those jackdaws who were, by nature, slower to recognize potential threats, would have fewer children survive. So the next generation would have more jackdaws who attacked immediately, and so forth.
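The selection argument can be made concrete with a toy simulation. Every number here (population size, brood size, mutation rate) is invented purely for illustration; the point is only that the average reaction speed drifts upward when survival depends on it:

```python
import random

# Toy simulation of natural selection: jackdaws vary in how quickly they
# attack a perceived threat; quicker birds lose fewer chicks, so the
# average reaction speed rises generation by generation.

random.seed(1)

def next_generation(speeds, brood=3):
    offspring = []
    for speed in speeds:
        for _ in range(brood):
            # Each chick's survival chance rises with the parent's speed.
            if random.random() < speed:
                # Children inherit the parent's speed, plus a small mutation.
                child = min(1.0, max(0.0, speed + random.gauss(0, 0.02)))
                offspring.append(child)
    # The habitat supports a fixed population: keep a random sample.
    random.shuffle(offspring)
    return offspring[:len(speeds)]

# Start with 200 birds of middling, varied reaction speed (0.2 to 0.6).
population = [random.uniform(0.2, 0.6) for _ in range(200)]
for generation in range(40):
    population = next_generation(population)

# After 40 generations the average speed is well above the 0.4 it began at.
print(round(sum(population) / len(population), 2))
```

No bird ever changed itself; the shift comes entirely from which variants left more surviving offspring, which is precisely Darwin's point.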
It's important to contrast Darwin's theory of evolution by natural selection with its predecessor: the theory of evolution by acquired characteristics, which was championed by the great eighteenth-century French botanist and zoologist Jean-Baptiste Lamarck. Lamarck argued that an animal could pass on to its offspring characteristics it was forced to develop because of constraints it experienced in its environment. As a famous example, the giraffe began as an animal with a normal-length neck. As the leaves at the lower levels of a tree were exhausted, giraffes had to stretch to reach leaves higher up. That stretched neck was passed on to the next generation, which in turn stretched further. Eventually the long-necked giraffe emerged.
In contrast, Darwin argued that nothing an animal could do to change itself physically could be passed on to the next generation. Though the original giraffes may have been short-necked, they did undoubtedly vary somewhat in the length of their necks. Because of mutations which occur in any generation of new young (which we now know are a normal part of DNA replication), some giraffes may have especially long or short necks. Those giraffes with longer necks were able to reach more food and so were more likely to survive and have children. Of those children, some had longer necks than others and again were more likely to survive and breed. Eventually only those with long necks could survive, due to the competition with other species who also ate leaves on the lower branches.
As Darwin's theory became almost universally accepted by science, it was natural for late nineteenth century theoreticians of the brain to recognize that "the complexity of the behavior of an animal reflects the complexity of its nervous system and thus that evolutionary changes in anatomy can give rise to new behavior patterns."(21) Only recently, though, has this recognition led to a scientifically sophisticated model of the brain. Nobel prize winner Gerald M. Edelman adds a major twist to this understanding with his theory of Neural Darwinism, which goes a long way toward explaining the remarkable ability of the human brain to adapt to a variety of circumstances.
In brief, Edelman looks at the growth and development of an individual brain as a process of natural selection. Because of the evolutionary history of the human brain, it already contains incredible numbers of groups of interconnected neurons. A rough estimate is one hundred billion neurons, connected into groups, with each such neuron group including anywhere from 50 to 10,000 neurons connected in highly complex ways. Our genes determine the initial connections of neurons into neuron groups. During the gestation period, the growth and development of the brain of the fetus is accompanied by a selection of perhaps a million such neuron groups. He calls this the primary repertoire; in other words, the possibilities which are wired-in at birth. But such hard-wired behavior is hardly sufficient to deal with the variety of the world. As Edelman argues:
…An individual animal endowed with a richly structured brain must also adapt without instruction to a complex environment to form perceptual categories or an internal taxonomy governing its further responses to the world.(22)
Remember Lorenz's contrasting examples of how goslings and mallards decide who is "mother"? Goslings imprint on the first living creature they see at birth, while mallards wait until they hear something that sounds like a mother mallard's quack. Both are strategies for adapting to the variety of situations they might encounter at birth. Both are built in genetically.
But as soon as these innate behaviors have kicked in, experience starts modifying the brain. For example, at birth both of a human baby's eyes are connected to all the neurons in the brain's visual cortex. But experience forces each eye to select neural connections for itself, effectively taking them away from the other eye. Since we use both eyes equally, eventually each has about half the pathways, with the resulting neuron groups looking nothing like what they were at birth. If for some reason one eye was covered continuously during that developmental stage, the other eye would end up with all the neural pathways. That would mean that the eye that had been covered could still see the world, but the information would never get to the brain.
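That competitive pruning can be sketched schematically. This is not a model of real visual cortex; the neuron counts and the pruning rule (each cortical neuron keeps whichever eye drove it more during development) are invented to illustrate selection by experience:

```python
import random

# Toy sketch of activity-dependent competition for cortical connections.
# Each cortical neuron starts connected to BOTH eyes; whichever eye drives
# it more during the "developmental window" keeps the connection, and the
# other connection is pruned.

random.seed(0)

def develop(neurons=1000, steps=200, left_open=True, right_open=True):
    # Each neuron tallies how strongly each eye has driven it.
    drive = [[0.0, 0.0] for _ in range(neurons)]
    for _ in range(steps):
        for tally in drive:
            if left_open:
                tally[0] += random.random()   # random visual activity
            if right_open:
                tally[1] += random.random()
    # Pruning: each neuron keeps only its more active input.
    left = sum(1 for l, r in drive if l > r)
    return left, neurons - left

print(develop())                      # both eyes open: roughly half and half
print(develop(left_open=False))       # left eye covered: right eye takes everything
```

With both eyes active the split settles near 50/50, but covering one eye hands every connection to the other, which is the essence of the developmental result described above.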
In the broad view across taxa, behavior is remarkably diverse and its relation to neural structure appears to be almost capricious. To wiggle the tail of a worm may take a network of thousands of neurons but to flick the tail of a fish only one.(23)
Why is this? Shouldn't evolution gradually make the brain's processes more and more efficient over time? In fact, most digital computer models of the brain view the programs it stores as logical algorithms, i.e., "a step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps."(24) Philosopher of science Larry R. Vandervert provides the most extreme view of this model with his theory of "neurological positivism." He views evolution as refining the algorithms of the brain to ever more perfect functionality, so that the brain, mind and world form a unity in which it is possible to transform any one into the other through pre-determined transformational rules.
This is a lovely model which may be largely true except for its assumption that evolution keeps honing the algorithms to ever-increasing perfection. The brain doesn't form its programs this way: in any generation, animals who have to come up with new behaviors to fit new situations draw on the existing structures of the brain, selecting whatever works, then modifying it as needed.
. . . the brain is not constructed to think abstractly -- it is constructed to ensure survival in the world. It has many of the characteristics of a good engineering solution applied to mental operation: do as good a job as you can, cheaply, and with what you can obtain easily. If this means using ad hoc solutions, with less generality than one might like, so be it. We are living in a particular world, and we are built to work in it and with it.(25)
Another short quote from Edelman brings this all together: "A central assumption of the theory is that perceptual categorization must both precede and accompany learning."(26) We have built in many more brain structures than we will ever use. During gestation, we select a healthy-sized group of those structures to draw on in life. After birth we start connecting those together and modifying them as we adapt to our environment. The brain grows larger and more complex; by the time we reach maturity, the human brain is four times bigger than it was at birth and "no two individual animals are likely to have identical connectivity in corresponding brain regions."(27) Each little gosling is able to follow Lorenz, behaving for all the world in identical fashion. Yet each is unique.
Now from the above, we can already see another major problem with the digital computer model of the brain. In digital computers, programs, once written, remain unchanged. Perhaps there is some degradation in the storage and retrieval process, but that invariably makes the program no longer run correctly; it can't in any way be viewed as analogous to mutations. In nature, we find in contrast that in many cases a behavior that fits one circumstance can adapt itself to a new circumstance. If the brain is a storehouse of programs, some stored permanently across generations, others only during the course of a person's life, those programs must be something quite different from the programs stored in a digital computer. They are much more like the neuron groups Edelman presents.
Long before we knew much about the actual structure of the brain, psychologist William James already realized that the brain exists as part of the world, not as a compilation of logical, immutable algorithms.
Mental facts cannot properly be studied apart from the physical environment of which they take cognizance…our inner faculties are adapted in advance to the features of the world in which we dwell…Mind and world in short have evolved together, and in consequence are something of a mutual fit.(28)
In the neural network model of the mind, thinking is not following a set of rules but a product of the complex interactions of huge numbers of neurons.
- William F. Allman. (29)
Earlier we presented the computer model of the brain, in which things were accomplished hierarchically: an executive decision (by consciousness) was supervised by a foreman level, who in turn had workers accomplish the tasks. The neurological equivalent of the workers are "hard-wired" connections between, for example, different receptors in our eyes and neurons in the brain. Groups of neurons are specialized in order to solve specific tasks; e.g., detecting edges, separating various curves, making color discriminations, and so forth. These hard-wired connections are essentially specialized workers who are called into action when we consciously decide to focus our attention somewhere. They are the parts of the brain most like digital computer programs.
Most of these hard-wired connections are directly described by our genes. There are living organisms simple enough that every single neural connection is described genetically. But for most animals, and certainly for human beings, our genes hardly begin the task of making such connections. For example, we have already said that at birth both of a baby's eyes are connected to all of the neurons in the visual cortex of the brain. Then experience intensifies some connections and eliminates others until each eye has about half the connections. This is much more efficient than having to describe all those connections genetically. So we end up with specialized workers in the brain, but they didn't have to be fully described genetically. This method of "writing programs" is similar to how simple programs often referred to as "macros" are created within word-processing, spreadsheet or database programs. Basically, you notify the computer to begin recording a macro, then you perform a series of tasks, then turn off the recording when you're done. The resulting macro program is stored and available for later use.
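The recording idea can be sketched in a few lines. The recorder and the action names below are invented for the example; real word-processor macros work analogously, though not identically:

```python
# Toy sketch of macro recording: actions performed while "recording" is on
# are stored as steps and can be replayed later, much as a word processor
# records a macro from the user's own actions rather than from written code.

class MacroRecorder:
    def __init__(self):
        self.recording = None    # name of the macro being recorded, if any
        self.macros = {}

    def start(self, name):
        self.recording = name
        self.macros[name] = []

    def do(self, action, *args):
        result = action(*args)
        if self.recording:                       # store the step while recording
            self.macros[self.recording].append((action, args))
        return result

    def stop(self):
        self.recording = None

    def play(self, name):
        # Replay every recorded step in order.
        return [action(*args) for action, args in self.macros[name]]

def upper(text): return text.upper()
def bold(text):  return f"**{text}**"

rec = MacroRecorder()
rec.start("shout")
rec.do(upper, "hello")
rec.do(bold, "hello")
rec.stop()

print(rec.play("shout"))    # replays the recorded steps: ['HELLO', '**hello**']
```

The program was never written out as instructions; it was captured from performance, which is the analogy to connections shaped by experience rather than specified by the genes.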
But much more of the brain's solutions to problems don't involve such hard-wiring at all. Nor do they involve the hierarchical, logical solutions of the digital computer model. Instead, "best-fit" solutions are arrived at in a way that seems like heresy to those who believe in a strictly cognitive model of the brain. "Computers are typically given a list of instructions to follow. A neural network, on the other hand, is 'taught' through a series of examples."(30) This new way of viewing the brain has been termed the neural network model.
A good metaphor for how this model works is a small town meeting. Imagine that everyone in the town has gathered in the town hall to discuss some problem that concerns everyone to a greater or lesser extent. Let's say that they are considering changing the zoning laws so that some land previously zoned for residence only is now going to be opened up for commercial interests. Since this is a small town, everyone in the room has some connection to everyone else, but the strength of those connections may vary considerably. For example, some people may be relatives, others friends, others acquaintances, still others business colleagues. At the start of the session, some people will probably already have definite feelings one way or the other about the issue. For example, the proposition will undoubtedly have been presented by businessmen who want to make use of this land. They will already have made connections with others to try to win them over to their point of view. In contrast, those with homes near the area under discussion may be strongly opposed to having it opened up for commercial use. They will probably have already talked this over with some friends as well.
So as the meeting begins, there are already some people with strong feelings for and against the proposition. But it's likely that most people won't yet have formed any opinion. When the meeting begins, the two opposing sides present their cases. As they do, those who were initially indifferent to the issue begin to lean one way or the other. They may do this on the basis of the issues or simply on the basis of their personal or business connections with someone on either side of the issue. Remember that the strength of the connections between people will vary considerably. As the discussion continues, there will be rising and ebbing tides as one side or the other gains temporary ascendancy. Gradually the discussion will begin to die down as everyone eventually comes to a decision. A consensus will emerge and the new zoning will either be accepted or turned down.
When everyone leaves the meeting, not only is the issue solved, but the strength of their connections with each other may have changed during the session, not only on this issue but in general. Someone may realize that they really should spend more time with some person who impressed them. Or they may have formed an antipathy to another person simply on the basis of what they said or how they behaved in the meeting. No one could have predicted in advance exactly who would end up for and against the proposal and how strongly they would feel about it. The final outcome is a unique solution to the original problem, one that would be impossible to logically dissect.
This "town meeting" model is a close approximation to how the brain solves many problems. Just imagine that the neurons in the brain are the townspeople and the strength of the connections between various neurons varies just as the townspeople vary in their feelings of attachment to each other. At any point in time, when some new solution needs to be created by the brain, the neurons are in various stages of activation, just as the townspeople were at the start of the meeting. As impulses flow between the various connections, some neurons turn on and some turn off, depending on the strength of the signal received and other particulars we can ignore in this discussion. Eventually, a new stability arises which leads to a resolution of the original issue, whether it was pattern recognition or a need for action.
In 1982, John Hopfield, a physicist at Caltech, proved mathematically that a network of simplified neurons acting in this way can process information and solve problems.(31) Logic wasn't needed. Hierarchies weren't needed. The brain can make use of the neuron groups it already has present in order to form new connections and arrive at a generalized solution.
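A minimal network of this kind, written in the spirit of Hopfield's 1982 paper, can be sketched in a few lines of Python. A pattern of +1/-1 "neurons" is stored in a matrix of connection strengths; recall consists of letting each neuron repeatedly vote according to its inputs until the whole network settles, much as the town meeting settled into consensus. The pattern size and number of settling passes are illustrative choices, not Hopfield's exact formulation.

```python
import numpy as np

def train(patterns):
    """Store +1/-1 patterns as a matrix of connection strengths."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)     # Hebbian rule: strengthen agreeing pairs
    np.fill_diagonal(W, 0)      # no neuron connects to itself
    return W

def recall(W, state, sweeps=10):
    """Let the network settle: each neuron votes to match its inputs."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train([pattern])
noisy = pattern.copy()
noisy[0] = -noisy[0]            # corrupt one "neuron"
print(np.array_equal(recall(W, noisy), pattern))   # True: consensus restored
```

Notice that nothing in the code reasons logically about the pattern; the stored pattern simply re-emerges as the stable consensus of all the local votes.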
In order to model how this actually works in practice, neuroscientist Gary Lynch collaborated with computer scientist Richard Granger in developing a computer model of a 500-neuron network to see if it could learn to discriminate between different smells. They had two different smell groups, "cheese" and "flower," with variations of each. At first the neural net came up with a unique pattern for each smell, but eventually, after repeated exposures to the individual odors, the neural net produced one pattern for the cheeses and another for the flowers. Without any instruction or any logic, it was able to generalize from the specific examples to the group to which they belonged. If they exposed it to any of the "cheese" smells, for example, it would always identify "cheese."
But something even more startling happened: as they continued to give it more sniffs of each particular cheese, at some point, the general pattern disappeared and the neural net then produced specific patterns for the various cheeses, within the general pattern. Now the network could not only discriminate "cheeses" from "flowers," but also different varieties of "cheeses." Granger said that:
We're thrilled with it. With the first sniff, it recognizes the overall pattern and says, "it's a cheese." With the next sniffs, it distinguishes the pattern and says, "It's Jarlsberg."(32)
This smell-discrimination model is a perfect example of a neural net, since the neuron groups formed are essentially random and unpredictable in advance. If the same model were wiped clean of its memory, then trained again, it would again learn to discriminate the smells, but it would do so with a totally different set of connections in the network. Lynch says "Smell isn't spatial or temporal. It doesn't exist in a dimensional world. It's like pure thought."(33) We will return to that idea later.
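Lynch's point about retraining can be illustrated with an even simpler learning rule than the one he and Granger used. In the Python sketch below (a basic perceptron, not their 500-neuron olfactory model; all names, sizes, and parameters are invented for illustration), two networks are trained from different random starting connections. Both learn to tell the "cheeses" from the "flowers," yet they end up with different sets of connection strengths.

```python
import numpy as np

def train_net(seed, cheeses, flowers, epochs=50):
    """Perceptron learning: adjust connections only on wrong guesses."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=cheeses.shape[1])   # random initial "wiring"
    b = 0.0
    for _ in range(epochs):
        for x, label in [(c, 1) for c in cheeses] + [(f, -1) for f in flowers]:
            if np.sign(w @ x + b) != label:  # wrong guess: adjust connections
                w = w + label * x
                b += label
    return w, b

rng = np.random.default_rng(0)
cheeses = 1.0 + 0.3 * rng.normal(size=(5, 10))    # noisy "cheese" smells
flowers = -1.0 + 0.3 * rng.normal(size=(5, 10))   # noisy "flower" smells

w1, b1 = train_net(seed=1, cheeses=cheeses, flowers=flowers)
w2, b2 = train_net(seed=2, cheeses=cheeses, flowers=flowers)

print(np.allclose(w1, w2))    # the two nets' connections differ
print(all(np.sign(w1 @ c + b1) == 1 for c in cheeses))  # same discrimination
```

The same discrimination, embodied in two entirely different sets of connections: the solution lives in the overall behavior of the network, not in any particular wire.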
This is especially interesting because mammals were the first animals to develop a keen sense of smell, which may have been what made first the mammalian brain, then later the primate brain (the cortex), grow in size. Lynch theorizes that when the dinosaurs disappeared, the mammals needed better vision as well as better smell. That produced a different type of neural connection. For example, the brain deals with vision by a combination of both the digital computer solution and the neural net solution. Some areas of vision work like directly wired algorithmic problem solvers, with specific neurons for specific tasks. Other parts work like smell, with no particular wiring necessary. And, in fact, we can train computer versions of neural nets to perform complex visual tasks, such as detecting varieties of curves, much better than the same tasks can be done simply by direct connections of particular neurons to particular parts of the brain.(34)
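The contrast can be made concrete. The sketch below shows the "directly wired" style: a fixed filter, decided in advance rather than learned from examples, that responds whenever a vertical edge falls under it, much as certain visual neurons respond only to specific features. The image and filter values are arbitrary illustrations.

```python
import numpy as np

# A fixed "vertical edge" detector: the weights are wired in advance,
# not learned. (Values are illustrative.)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

image = np.zeros((5, 6))
image[:, 3:] = 1.0              # dark on the left, light on the right

def respond(image, kernel):
    """Slide the fixed detector over the image; a large value means the
    feature it is wired for sits under that position."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

response = respond(image, kernel)
print(response[0])              # strongest response over the edge columns
```

The detector does one thing, always the same way, wherever it is pointed; the neural net style, by contrast, shapes itself to whatever examples it is given.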
Let's call these neural net solutions "natural programs" to distinguish them from "algorithmic programs" such as the digital computer model of the brain requires. Natural programs have practical advantages over algorithmic programs and nature always likes to improve its odds of success. Because natural programs are stored in an overall pattern, the more times the pattern is accessed from various input situations, the stronger it becomes. Because it is a pattern spread across a wide number of neurons, even if a substantial portion of the neurons are damaged or even destroyed, the pattern can usually still be reproduced from those that remain. If the newly reproduced pattern is less than perfect, it will get better again as it is reused.
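That damage tolerance is easy to demonstrate with a small Python sketch (the network size, noise level, and damage fraction here are arbitrary choices for illustration): a single +1/-1 pattern is stored across a full matrix of connection strengths, a third of those connections are then destroyed at random, and the network is nevertheless still able to settle back to the stored pattern from an imperfect cue.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
pattern = rng.choice([-1, 1], size=n)     # the stored "natural program"

W = np.outer(pattern, pattern).astype(float)   # Hebbian storage
np.fill_diagonal(W, 0)
W[rng.random((n, n)) < 1/3] = 0.0         # destroy a third of the connections

state = pattern.copy()
flip = rng.choice(n, size=10, replace=False)
state[flip] *= -1                          # a noisy, imperfect cue

for _ in range(5):                         # let the damaged network settle
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print((state == pattern).all())            # True: the pattern survives
```

Because the pattern is spread over thousands of connections, no single connection is essential; each surviving neuron still receives enough agreeing votes to restore its part of the whole.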
Before we leave this section, it is important to emphasize that the neural net solutions of the brain take place between the receptors and the neocortex, the human brain. The connections to, from, and largely within the mammalian and reptile brains are directly wired and inherited, beyond change through experience.
Let's take stock of what we've already discovered about the brain and see to what extent it begins to point toward the unitary world we promised in the introduction, the world in which knowledge emerges from the inside out.
First, we found that we possess not one brain but three, each representing a different stage of evolutionary history and each largely dealing with operations specific to itself. Though we think of ourselves largely as rational beings who make conscious decisions about our lives, a great deal of our behavior is already stored at birth in the older reptile and mammalian brains. We looked at some examples from ethologist Konrad Lorenz of how the mother/child relationship, courting behaviors, and recognition of danger are built in for several different species of birds. But quite obviously, such behaviors are also built in for dogs and cats and monkeys--and humans. Though the development of the neocortex allows us to exercise much more control over those inherited actions than, for example, Lorenz's jackdaws, which attack anything that is black and moves in a certain way, we would do well to realize just how much of our behavior is determined directly by those more primitive brains within us. The human brain then comes along after the fact and thinks up supposedly rational reasons for actions that were actually driven by those irrational brains.
We then saw how evolution operates within the structure of those three brains. Each brain, especially the two oldest, can be regarded as a repository of solutions to problems first presented to the species that preceded humans, solutions later adapted by humans. We've seen that the brain's structure tends to be composed of "good enough" solutions to problems. Where a species needs some special ability specific to itself, such as a frog's need to quickly spot moving bugs, or a human being's need to recognize a wide variety of faces, a new brain structure forms gradually due to evolutionary pressure. But otherwise, if something worked for a lizard that later evolved into a bird, there is no need for the structure to change. If something specific in the bird's environment requires a change in its behavior, the change is likely to be a modification of the original solution rather than something radically new, developed specifically for birds.
Further, there is no evolutionary need for even out-of-date neural structures to disappear, unless they conflict with necessary structures. Doesn't Jung's concept of archetypal structures begin to seem a little less strange, given how the brain has developed? Let's take an example: I think most of us would accept that falling in love must have some such archetypal structure. After all, it happens in very similar ways for all humans in all cultures. In that respect we aren't much different from the jackdaw who fell in love with Lorenz and tried to feed him worms. Human cultures have a wide variety of ways in which that courtship behavior is expressed, so we know that the specific behaviors are not hard-wired, as they are for the jackdaw. But the similarities of behavior are more striking than the cultural differences, or for that matter, even the species differences.
With the increased development of the visual centers of the neocortex, those "good enough" behavioral structures undoubtedly began to form neural linkages with visual structures, providing images that matched those behaviors, so that archetypes developed two faces: image and behavior. So archetypes are a reasonable way to describe the way the brain appears to be structured.
Within limits, the digital computer model of the brain is a fair start at an approximation to the way these hard-wired archetypal structures are stored and function. Just as a computer can load special purpose programs as needed, the brain can call upon special purpose "programs" when it needs them. Some of these programs are written by evolution and stored in the brain's structure at birth, some are learned in the course of our development. These learned but still directly wired programs are probably best exemplified by, for example, the connections between some of the receptors in the eyes and the neurons in the visual centers of the brain.
But we also saw that with the increased development of the neocortex in humans (and the higher primates), a second type of brain structure appeared, which can best be represented by neural nets. Rather than a hard-wired solution, the brain instead gradually comes to a solution which is spread widely over the brain's structure (and here, by brain, we are largely talking of the "human brain," the neocortex). We presented a "town meeting" model of how this process might operate. Later in the book, when we discuss chaos theory, we will present this theme again in terms of "attractors."
Even more than the hard-wired solutions created by evolution, these "natural programs" (as we termed them), are "just-so" stories. There is no logical necessity for a particular structure to emerge as a solution to a problem; since the solution is spread widely over the structure of the brain, many other possibilities would do just as well.
The neural net model also offers a possibility for the unus mundus, the unitary world, to extend past the storage within a single human being. Clearly this method of solving problems and storing memories over an entire structure is efficient, or nature would never have developed it. At this point, we don't know exactly how it operates, except that it is global rather than local.
Neurophysiologist Karl Pribram was the first to notice that this ability of the brain to store solutions and memories over its entire structure is similar to a hologram, in which a three-dimensional picture may be stored by interference patterns on film. He now believes that there are two storage structures operating simultaneously in the brain: one more localized, defined through connections between neurons, the other global, working through the astonishingly numerous dendritic connections within the brain. We will address his theory at more length later, but what is important is that he thinks that the holographic brain is itself a part of a holographic universe. Just as the brain is able to store and access information over its total structure, the universe is able to store and access information over every part of itself, including each human being. In that respect, Pribram has this to say:
It isn't that the world of appearances is wrong; it isn't that there aren't objects out there, at one level of reality. It's that if you penetrate through and look at the universe with a holographic system, you arrive at a different view, a different reality. And that other reality can explain things that have hitherto remained inexplicable scientifically: paranormal phenomena, synchronicities, the apparently meaningful coincidence of events.(35)
Synchronicity, the "apparently meaningful coincidence of events," as Pribram puts it, is the topic of our next chapter.
1. Anon., "The Thunder Perfect Mind," in James M. Robinson, ed., The Nag Hammadi Library (San Francisco: Harper & Row, 1988), p. 302.
2. Gary Zukav, The Dancing Wu Li Masters (New York: Bantam, 1979), p. 92.
3. Marc Jeannerod, The Brain Machine: The Development of Neurophysiological Thought (Cambridge, Massachusetts and London, England: Harvard University Press, 1985), pp. 84-5.
4. "Gray's Theory Incorporates Earlier Evolutionary Model of 'Triune Brain,'" Brain/Mind Bulletin (March 29, 1982), p. 4.
5. Richard Adams, Watership Down (New York: MacMillan, 1972).
6. Carl Sagan, The Dragons of Eden: Speculations of the Evolution of Human Intelligence (New York: Ballantine Books, 1977), p. 58.
7. Charles Hampden-Turner, Maps of the Mind (New York: MacMillan, 1981), p. 82.
8. C. P. Snow, "The Two Cultures and the Scientific Revolution,"(1959), in C. P. Snow, Public Affairs (New York: Charles Scribner's Sons, 1971), pp. 13-46.
9. C. P. Snow, "The Two Cultures: A Second Look,"(1963), in C. P. Snow, Public Affairs (New York: Charles Scribner's Sons, 1971), p. 62.
10. Elmer Green in personal conversation with Tony Schwartz, in Tony Schwartz, What Really Matters (New York: Bantam Books, 1995), p. 188.
11. quoted in the preface to Konrad Lorenz, King Solomon's Ring (New York: Time Incorporated, 1952), p. xxi.
12. James A. Anderson and Edward Rosenfeld, eds., Neurocomputing: Foundations of Research (Cambridge: MIT Press, 1988), p. 2.
13. Konrad Lorenz, King Solomon's Ring, p.47.
14. Konrad Lorenz, King Solomon's Ring, pp. xxiv-xxv.
15. Konrad Lorenz, King Solomon's Ring, p. 48.
16. Konrad Lorenz, King Solomon's Ring, pp. 153-4.
17. Konrad Lorenz, King Solomon's Ring, pp. 157-9.
18. Albert Einstein, "Motiv des Forschens," 1918, in Gerald Holton, Thematic Origins of Scientific Thought (Cambridge, Harvard University Press, 1973), pp. 376-7.
19. Marc Jeannerod, The Brain Machine: The Development of Neurophysiological Thought (Cambridge, Mass: Harvard University Press, 1985), p.2.
20. Gerald M. Edelman, Neural Darwinism (New York: Basic Books, 1987), p. 25.
21. Gerald M. Edelman, Neural Darwinism (New York: Basic Books, 1987), p. 10.
22. Gerald M. Edelman, Neural Darwinism, pp. 8-9.
23. Gerald M. Edelman, Neural Darwinism, p. 23.
24. American Heritage Talking Dictionary (New York: The Learning Company, Inc., 1997).
25. James Anderson & Edward Rosenfeld, editors, Neurocomputing: Foundations of Research (Cambridge, Massachusetts and London, England: the MIT Press, 1988), p. 1.
26. Gerald M. Edelman, Neural Darwinism, p. 7.
27. Gerald M. Edelman, Neural Darwinism, p. 5.
28. quoted in James Anderson & Edward Rosenfeld, Neurocomputing: Foundations of Research, p. 1.
29. William F. Allman, Apprentices of Wonder: Inside the Neural Network Revolution (New York: Bantam Books, 1989), p. 22.
30. William F. Allman, Apprentices of Wonder, p. 12.
31. J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences 79 (1982), pp. 2554-8, reprinted in James A. Anderson and Edward Rosenfeld, Neurocomputing: Foundations of Research, pp. 460-4.
32. William F. Allman, Apprentices of Wonder, p. 76.
33. William F. Allman, Apprentices of Wonder, p. 77.
34. William F. Allman, Apprentices of Wonder, pp. 76-7.
35. Psychology Today.