
Chapter 4 - Animal Consciousness and Higher Mental States


Part A - Phenomenal Consciousness in Animals


In what follows, I shall use the term phenomenal consciousness as Block (1997) does, to denote states with a subjective feel, which can be immediately recognised if not defined. Van Gulick (2004) prefers to use the term qualitative consciousness for subjective feelings or qualia (such as the experience of seeing red), and defines phenomenal consciousness in a richer sense, as including the overall structure of experience, in addition to sensory qualia.

Although scientists do not use the term "phenomenal consciousness", they employ a closely related term, primary consciousness, which "refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions" but excludes "awareness of one's self as an entity that exists separately from other entities" (Rose, 2002, p. 6). The main criterion used by scientists to verify the occurrence of primary consciousness in an individual is his/her capacity to give an accurate verbal or non-verbal report on his/her surroundings.

As some non-human animals can give non-verbal reports of events in their environment, the relation between phenomenal and primary consciousness is philosophically significant for the purposes of this thesis.


4.A.1 What philosophers have to say about animal consciousness

4.A.1.1 Philosophical distinctions regarding consciousness

Many contemporary philosophers argue that the question of which animals possess phenomenal consciousness can only be answered by carefully distinguishing it from other notions of "consciousness". Table 4.1 lists some of the more commonly cited distinctions.

My research on the subject of consciousness has led me to conclude that only one of the philosophical distinctions drawn between the various forms of consciousness is of any help in resolving the Distribution Question (Which animals are phenomenally conscious?). The distinction between transitive creature consciousness and phenomenal consciousness is relevant to the Distribution Question; the other philosophical concepts of consciousness lack relevance, for one or more of the following reasons:

(a) although they may help to sharpen our philosophical terminology, they have no bearing on the Distribution Question;

(b) they are poorly defined;

(c) they are inapplicable to non-human animals;

(d) they fail to "carve reality at the joints" as far as animal consciousness is concerned - that is, they apply to too many animals (including animals that cannot plausibly be described as phenomenally conscious) or too few (e.g. humans and great apes only).

During the course of my research, I also found that the distinction between transitive and intransitive creature consciousness, which was supposed to be purely conceptual, turned out to be a real distinction.

I also discovered that what appeared to be a robust nomic connection between wakefulness (defined according to brain-based criteria) and phenomenal consciousness had been entirely overlooked by philosophers, because of a conceptual distinction they had already formulated between these two notions of consciousness.

Finally, in Table 4.2 below, I propose three concepts of consciousness that I have uncovered in the scientific literature, which (I believe) do a better job of "carving reality at the joints" as far as animals are concerned than existing philosophical categories. Regrettably, these concepts of consciousness are almost entirely ignored in the current philosophical literature relating to consciousness.


Table 4.1 - Various philosophical usages of the term "consciousness".
Based on Rosenthal (1990, 2002); Dretske (1997); Block (1995, 1997, 1998, 2001); Carruthers (2000, 2004); Lurz (2003); and Van Gulick (2004).

N.B. For additional distinctions between different kinds of consciousness, see Van Gulick (2004): http://plato.stanford.edu/entries/consciousness/
Term Definition Comments
1. Creature consciousness: Consciousness as applied to a living organism (e.g. a bird).

(N.B. The distinction between creature consciousness and state consciousness was first suggested by Rosenthal (1986).)

My verdict: The distinction between creature consciousness and state consciousness is philosophically significant, but has no bearing on the question of which animals are phenomenally conscious.
2. State consciousness: Consciousness as applied to mental states and processes (e.g. a bird's perception of a worm). My verdict: See comments for creature consciousness above.

Additional comments: Ned Block (1997) has criticised the concept of state consciousness as a mongrel concept, and proposed a distinction between two different types of state consciousness: access consciousness and phenomenal consciousness. See comments below.

Selected varieties of creature consciousness
Intransitive creature consciousness: Being awake as opposed to asleep (e.g. a bird possesses intransitive creature consciousness if it is awake and not asleep or comatose). My verdict: 1. Poorly defined concept. Assumes that "wakefulness" and "sleep" can be defined uniformly across different classes of creatures. Fails to distinguish between two very different criteria for wakefulness and sleep in animals - behavioural criteria, which are satisfied to some degree by nearly all animals, and brain-based criteria, which are only satisfied by mammals and birds (Shaw et al., 2000).

2. The philosophical distinction between phenomenal consciousness and wakefulness is actually counter-productive to attempts to resolve the Distribution Question, as it overlooks what appears to be a nomic connection between an animal's satisfying the brain-based criteria for wakefulness and its being able to give an accurate report of its surroundings - which is how scientists routinely assess the presence of consciousness. There is a sharp contrast between the EEG patterns of human patients in states of global unconsciousness (deep unconscious sleep, coma, PVS, general anaesthesia and epileptic states of absence) and the EEG of patients in a state of waking consciousness, who are able to give "accurate, verifiable report" of events in their surroundings (Baars, 2001, p. 35). Additionally, everyday experience shows that no matter how hard we try, we cannot rouse a sleeping person to brain wakefulness without thereby making her (a) alert to her surroundings (primary-conscious) and (b) phenomenally conscious. Moreover, some neuroscientists believe brain sleep to be intimately related to phenomenal consciousness (Cartmill, 2000; Baars, 2001; White, 2000), and some (Baars, 2001; Cartmill, 2000) have even suggested that wakefulness - defined according to brain criteria - is a reliable indicator of phenomenal consciousness across all animal species. The connection between having a brain that is awake and being phenomenally conscious may well turn out to be nomic in animals.

3. Intransitive consciousness defined according to behavioural criteria can exist in the absence of phenomenal consciousness, as shown by the condition of persistent vegetative state (PVS), which has been defined as "chronic wakefulness without awareness" (JAMA, 1990). I describe this condition in the Appendix. PVS patients display a variety of wakeful behaviours, all of which are generated by their brain stems and spinal cords. Studies have shown that activity occurring at this level of the brain is not accessible to conscious awareness in human beings (Rose, 2002, pp. 13-15; Roth, 2003, p. 36).

Conclusion: The term intransitive creature consciousness is inadequate as it now stands.

Additional Notes: Animal sleep that also satisfies electrophysiological criteria is called true or brain sleep. Brain sleep is defined by various criteria, including: EEG patterns that distinguish it from wakefulness; a lack of or decrease in awareness of environmental stimuli; and the maintenance of core body temperature (in warm-blooded creatures) (White, 2000). Also, there is a massive contrast between the EEG patterns of human patients in states of global unconsciousness (deep unconscious sleep, coma, persistent vegetative state, general anaesthesia and epileptic states of absence) and the EEG of patients in a state of waking consciousness (Shaw et al., 2000).

Transitive creature consciousness: Consciousness of objects, events, properties or facts (Dretske, 1997). Example: a bird's consciousness of a wriggling worm that looks good to eat. Also called perception. My verdict: 1. Poorly defined concept. Transitive creature consciousness, in its broadest sense, could be said to be a property of all cellular organisms, as they all possess senses of some sort (see chapter two, part B). In a narrower sense, the term applies to all organisms with a nervous system, as they possess "true" senses (Cotterill, 2001).

2. Research shows that an animal can possess transitive consciousness in the absence of phenomenal consciousness (pace Dretske, 1997). The vomeronasal system, which responds to pheromones and affects human behaviour but is devoid of phenomenality (Allen, 2004a, p. 631), is one good example; the phenomenon of blindsight in humans and monkeys (Stoerig and Cowey, 1997, pp. 536-538; p. 552) is another.

3. Transitive and intransitive creature consciousness are not co-extensive in nature; hence the distinction between them is real, and not merely conceptual. A few animals (e.g. alligators) show no sign of behavioural sleep. Additionally, the term "behavioural sleep" has been defined for animals but not for bacteria, protoctista, plants, or fungi (Kavanau, 1997, p. 258). Yet all of these creatures could be said to possess transitive consciousness to some degree.

Conclusion: The criteria for possession of transitive creature consciousness need to be more clearly specified. For instance, blindsight varies across patients in its degree of severity, and the specificity of the responses shown by these patients varies accordingly (Stoerig and Cowey, 1997, pp. 536-538). Which of these responses count as bona fide instances of transitive creature consciousness?

Selected varieties of state consciousness
Access consciousness: According to Block, "a representation is access-conscious if it is actively poised for direct control of reasoning, reporting and action" (1998, p. 3). Direct control, according to Block, occurs "when a representation is poised for free use as a premise in reasoning and can be freely reported" (1998, p. 4). Elsewhere, Block (1995) stipulates that an access-conscious state must be (i) poised to be used as a premise in reasoning, (ii) poised for rational control of action, and (iii) poised for rational control of speech.

Block (2001) now prefers to use the term global access instead of access consciousness.

My verdict: 1. Because access consciousness presupposes rationality in a fairly explicit sense, it is doubtful whether it applies to any non-human animals. Block's claim that not only do some non-linguistic animals (e.g. chimps) have access-conscious states (1995, p. 238), but "very much lower animals" are access-conscious too (1995, p. 257), is therefore puzzling.

2. Access consciousness can exist in the absence of phenomenal consciousness, in certain situations. The strongest evidence for this claim comes from recent studies of the mammalian visual system (discussed in Carruthers 2004b). Research by Milner and Goodale (1995) suggests that each human brain has two visual systems: a phenomenally conscious system that allows the subject to select a course of action but which she cannot attend to when actually executing her movements, and an access-conscious system that guides her detailed movements but is not phenomenally aware. However, these findings relate to just one sensory modality (sight) and only apply to a limited class of animals (mammals).

The case of the distracted driver, who is supposedly able to navigate his car home despite being oblivious to his visual states, is not a convincing example of access consciousness in the absence of phenomenal consciousness (Wright, 2003). See Appendix.

Conclusion: Block's notion of access consciousness is a philosophically useful one. Nevertheless, it is of no use in helping us to answer the Distribution Question: it is, if anything, even more cognitively demanding than phenomenal consciousness, as it occurs only when an internal representation is "poised for free use as a premise in reasoning and can be freely reported" (1998, p. 4). To answer the Distribution Question, we need to define a "weaker" notion of consciousness that many animals could plausibly be said to satisfy even if they lacked phenomenal consciousness.

Additional comments: Rosenthal (2002) faults Block's definition of access consciousness, on the grounds that one's ability to rationally control one's actions does not require consciousness of any kind. He finds Block's new definition equally problematic: global access is neither necessary nor sufficient for consciousness.

Phenomenal consciousness: Block (1995) defines phenomenally conscious states as states with a subjective feel or phenomenology, which we cannot define but we can immediately recognise in ourselves.

Recently, Block (2001) has forsworn the term "phenomenal consciousness" in favour of what he calls phenomenality.

Van Gulick (2004) defines phenomenal consciousness more broadly than Block: it applies to the overall structure of experience and involves far more than sensory qualia (raw subjective feelings, such as the experience of seeing red).

My verdict: 1. The question of which animals are phenomenally conscious will most likely be answered by neurologists. At present, neither scientists nor philosophers can agree on what phenomenal consciousness is, how it first arose in organisms, or even what it is for (i.e. what function it serves). However, there is a broad scientific consensus on the neurological conditions for consciousness (see below).

2. It is likely that human beings are capable of having phenomenally conscious experiences without access consciousness, due to lack of attention or rapid memory loss. In his discussion of the refrigerator that suddenly goes off, Block cites "the feeling that one has been hearing the noise all along" as evidence for inattentive phenomenality (1998, p. 4). The most straightforward way of explaining this case is the hypothesis that "there is a period in which one has phenomenal consciousness of the noise without access consciousness of it" (1998, p. 4).

Additional comments: Rosenthal (2002) has criticised Block's (2001) account of phenomenal consciousness for its ambiguity between two very different mental properties, which Rosenthal refers to as thin phenomenality (the occurrence of a qualitative character without a subjective feeling of what it's like) and thick phenomenality (the subjective occurrence of mental qualities). Rosenthal considers only the latter to be truly conscious.

Reflexive consciousness: Also known as introspective or monitoring consciousness; applies to states one is aware of. According to Block (1995, 2001), a state is reflexively conscious if it is the object of another of the subject's states (e.g. when I have a thought that I am having an experience). Alternatively, "a state S is reflexively conscious just in case it is phenomenally presented in a thought about S" (Block, 2001, p. 215). Similarly, Rosenthal (1986, 1996) defines a conscious mental state as a mental state one is aware of being in.

Conscious states in this sense require the existence of mental states that are about other mental states. "To have a conscious desire for a cup of coffee is to have such a desire and also to be simultaneously and directly aware that one has such a desire" (Van Gulick, 2004).

My verdict: May not be applicable to non-human animals. It has yet to be shown that any non-human animals are capable of reflexive consciousness. Lurz (2003) considers the idea of a non-human animal having thoughts of any kind about its mental states to be highly implausible. (On Lurz's "same-order" account, a creature's experiences are conscious if it is conscious of what its experiences represent - i.e. their intentional object - even if it is not conscious that it is perceiving.)

Philosophers are currently divided over whether awareness of one's mental states is a requirement for having phenomenal consciousness (see Rosenthal, 2002; Dretske, 1995).

Self-consciousness: Block (1995) defines self-consciousness as the possession of the concept of the self and the ability to use this concept in thinking about oneself. My verdict: May not be applicable to non-human animals. "As yet, the only evidence that an animal may have an awareness of the 'self' versus awareness of other individuals has been demonstrated in chimpanzees and possibly orang-utans and dolphins" (Emery and Clayton, 2004, p. 41; see also Gallup, Anderson and Shillito, 2002; Reiss and Marino, 2001). This evidence comes from mirror tests.

Some philosophers (Leahy, 1994) question even this evidence, arguing that mirror tests merely indicate that an animal possesses consciousness of its own body, as opposed to true self-consciousness.

Have we raised the bar too high? "If ... [t]he self-awareness requirement ... is taken to involve explicit conceptual self-awareness, many non-human animals and even young children might fail to qualify" (Van Gulick, 2004).


Table 4.2 - Proposed new categories of animal consciousness, which I have uncovered in the scientific literature. The names of the categories are my own.
Term Definition Comments
Integrative consciousness: The kind of consciousness which gives an animal access to multiple sensory channels and enables it to integrate information from all of them. Mammals possess this kind of consciousness; snakes appear to lack it:

It seems that a snake does not have a central representation of a mouse but relies solely on transduced information. The snake exploits three different sensory systems in relation to prey, like a mouse. To strike the mouse, the snake uses its visual system (or thermal sensors). When struck, the mouse normally does not die immediately, but runs away for some distance. To locate the mouse, once the prey has been struck, the snake uses its sense of smell. The search behavior is exclusively wired to this modality. Even if the mouse happens to die right in front of the eyes of the snake, it will still follow the smell trace of the mouse in order to find it. This unimodality is particularly evident in snakes like boas and pythons, where the prey often is held fast in the coils of the snake's body, when it e.g. hangs from a branch. Despite the fact that the snake must have ample proprioceptory information about the location of the prey it holds, it searches stochastically for it, all around, only with the help of the olfactory sense organs (Sjolander, 1993, p. 3).
Finally, after the mouse has been located, the snake must find its head in order to swallow it. This could obviously be done with the aid of smell or sight, but in snakes this process uses only tactile information. Thus the snake uses three separate modalities to catch and eat a mouse (Dennett, 1995b, p. 691).
Object consciousness: Awareness of object permanence; ability to anticipate that an object which disappears behind an obstacle will subsequently re-appear. Reptiles appear to lack the concept of object permanence:

A snake has no ability to anticipate that a mouse running behind a rock will reappear. Cats and other predatory mammals are able to anticipate that the prey will reappear (Grandin, 1998).
Anticipatory consciousness: Ability to visually anticipate the trajectory of a moving object. Mammals can "lead" moving prey they are attacking by anticipating their trajectories - an ability that depends on their visual cortex (Kavanau, 1997, p. 255). Pigeons also possess this ability (Wasserman, 2002, p. 180). There is no evidence that fish or amphibians can do so.


4.A.1.2 An outline of the current philosophical debate on animal consciousness

The contemporary philosophical debate about animal consciousness is split into several camps, with conflicting intuitions regarding the following four jointly inconsistent propositions (Lurz, 2003):

1. Conscious mental states are mental states of which one is conscious.
2. To be conscious of one's mental states is to be conscious that one has them.
3. Animals have conscious mental states.
4. Animals are not conscious that they have mental states.

Common to all of the above positions is an underlying assumption: that the difference between phenomenally conscious mental states and other states can be formulated in terms of concepts which already exist within our language. This assumption may turn out to be wrong: we may require new linguistic terminology to formulate this distinction properly.

I would suggest that the "original sin" of philosophers who have formulated theories of phenomenal consciousness was to suppose that the requirements for subjectivity could be elucidated through careful analysis. Now, an analytical approach might work if we had a good idea of what consciousness is, or why it arose in the first place, or what it is for. In fact, we know none of these things. Table 4.4 below illustrates this point: it lists a selection of theories regarding why consciousness exists.

Although a scientific consensus on the "why" of consciousness remains elusive, there is an abundance of neurological data relating to how it originates in the brain. In section 4.A.2, I review what scientists have discovered about primary consciousness, and I discuss the neurological requirements for consciousness.

Table 4.3 - Key positions in the contemporary philosophical debate on "consciousness" (Lurz, 2003)
School of thought Description of school's position Comments
Higher-order representational (HOR) theories of consciousness: Accept propositions 1 and 2, and either 3 or 4. HOR theorists argue that a mental state (such as a perception) is not intrinsically conscious, but only becomes conscious as the object of a higher-order state. Higher-order states are variously conceived as thoughts (by HOT theorists) or as inner perceptions (by HOP theorists). Dretske (1997) objects that HOR theories fail to explain the practical function of consciousness and thus effectively marginalise it. More recently, higher-order theorists have formulated their own proposals regarding the function of consciousness (see Carruthers, 2000). For an overview of theories of the function of phenomenal consciousness, see Table 4.4 below.
Exclusive HOR theorists (Carruthers, 2000, 2004): Accept propositions 1, 2 and 4 but reject 3 - that is, they allow that human infants and non-human animals have beliefs, desires and perceptions, but insist (Carruthers, 2000, p. 199) that we can explain their behaviour perfectly well without attributing conscious beliefs, desires and perceptions to them. My comment: Internally consistent, but almost certainly sets the bar for having phenomenal consciousness too high. The most natural way of explaining the similarity in responses between humans and monkeys with blindsight (Stoerig and Cowey, 1997) is to suppose that both lack the same thing: phenomenal consciousness. Recent experiments with binocular rivalry, demonstrating that humans and monkeys make identical reports about what they see when conflicting data is presented to their left and right visual fields (Logothetis, 2003), suggest even more strongly that monkeys experience the world as we do. However, Carruthers could reply that there is no need to postulate higher-order states here: the monkeys simply have fluctuating first-order perceptions, which they have been conditioned to respond to by pulling a lever.

One may disagree with Carruthers' contention that an ability to distinguish between the way things appear and the way they really are is a pre-requisite for phenomenal consciousness, but it is an internally consistent position. (Allen (2002) proposes that any animals that can learn to correct their perceptual errors are phenomenally conscious, though he does not make it a necessary requirement.) I argue in the Appendix to chapter 4 part A that the meagre experimental evidence available suggests that only human beings meet Carruthers' requirement for phenomenal consciousness.

On the other hand, Carruthers' (2000) positive arguments in support of his theory of the origin of consciousness are rather unconvincing, and have been subjected to a detailed critique by Allen (2004a) (summarised in the Appendix to chapter 4 part A).

Inclusive HOR theorists (Rosenthal, 1986, 2002): Accept propositions 1, 2 and 3 but reject 4. Rosenthal (2002) construes an animal's consciousness of a mental state as its having a thought that it is in that state. Such a thought requires a minimal concept of self, but "any creature with even the most rudimentary intentional states will presumably be able to distinguish between itself and everything else" (2002, p. 661). My comment: Attributes an implausible level of cognitive sophistication to non-human animals. Rosenthal's (2002) HOT theory requires an animal to have the higher-order thought that it is in a certain state before the state can qualify as conscious. This is a very strong requirement.

According to HOT theorists, mental states do not become conscious merely by being observed; they become conscious by being thought about by their subject. This means that animals must have non-observational access to their mental states. As Lurz (2003) remarks, this is an implausible supposition for any non-human animal: "it is rather implausible that my cat... upon espying movement in the bushes... is conscious that she sees movement in the bushes, since it is rather implausible to suppose ... that my cat has thoughts about her own mental states".

Lurz's (2003) same-order (SO) account: Presents itself as a via media between HOR and FOR. Accepts propositions 1, 3 and 4 but rejects 2. Lurz grants the premise that to have conscious mental states is to have mental states that one is conscious of, but queries the assumption (shared by HOR and FOR theorists) that to be conscious of one's mental states is to be conscious that one has them. Lurz suggests that a creature's experiences are conscious if it is conscious of what its experiences represent - their intentional object - even if it is not conscious that it is perceiving.

My comment: Sets the bar for having phenomenal consciousness too low. Lurz's criteria could be satisfied by many animals that scientists agree are not phenomenally conscious (see below).

The example given by Lurz (2003) is that of a cat who notices a movement in the bushes and then behaves in a way that warrants our saying that she is paying attention to it. This (according to Lurz) implies that she is conscious of what she is seeing, and hence conscious of what a token visual state of hers represents, and thus in some way conscious of the mental state itself. However, the cognitive requirements that Lurz is imposing on animals are hardly exacting, as they seem to include nothing more than: (a) a capacity for paying attention, which exists in a rudimentary form even in fruit-flies (Van Swinderen and Greenspan, 2003), and (b) a capacity for object recognition, which is also found in honeybees (Gould and Gould, 1988) (see Appendix to chapter two part D).

However, the available neurological evidence suggests that these animals lack the wherewithal for phenomenal consciousness (see below).

First-order representational (FOR) accounts of consciousness (Dretske, 1995): Accept propositions 2, 3 and 4 but reject 1. First-order representational (FOR) theorists believe that if a perception has the appropriate relations to other first-order cognitive states, it is phenomenally conscious, regardless of whether the perceiver forms a higher-order representation of it (see Wright, 2003). For example, Dretske argues that a mental state becomes conscious simply by being an act of creature consciousness. Thus an animal need not be aware of its states for them to be conscious.

On this account, consciousness has a very practical function: to alert an animal to salient objects in its environment - e.g. potential mates, predators or prey. However, attention is not a pre-requisite for consciousness: "You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it" (Dretske, 1997, p. 2).

My comment: Sets the bar for having phenomenal consciousness too low. Dretske's theory appears to have been falsified by empirical evidence indicating that transitive creature consciousness can occur in the absence of phenomenal consciousness - as illustrated by the vomeronasal system, which responds to pheromones and affects human behaviour but is devoid of phenomenality (Allen, 2004a, p. 631), and the phenomenon of blindsight in humans and monkeys (Stoerig and Cowey, 1997, pp. 536-538; p. 552). There is also a massive body of neurological evidence (discussed below) indicating that phenomenal consciousness can only occur in animals with the right kind of brains: being able to perceive stimuli is not enough.

Additional comments: Lurz (2003) argues against Dretske on linguistic grounds: it is counter-intuitive to say that an animal could have a conscious experience of which it was not conscious. However, this argument overlooks the possibility that there may be different degrees of phenomenality - as shown by phenomena such as peripheral vision, so-called "distracted driving" and change blindness (for a discussion, see Hardcastle, 1997; Wright, 2003).

A better argument against Dretske is that while some of a conscious animal's experiences may well be first-order states as he proposes, it would be improper to describe the creature as phenomenally conscious if all of its experiences were of this sort.

Finally, Dretske's (1997) assertion that attention is not required for consciousness is at odds with his argument that consciousness must serve a practical function in promoting an animal's survival. An animal completely lacking the ability to pay attention to a salient stimulus would not survive very long in the wild.


Table 4.4 - Theories of what consciousness is for: a brief overview
1. Consciousness is an epiphenomenon (Huxley).
2. Conscious feelings exist because they motivate an animal to seek what is pleasant and avoid what is painful (Aristotle).
3. Consciousness arose because it enabled its possessors to unify or integrate their perceptions into a single "scene" that cannot be decomposed into independent components (Edelman and Tononi, 2000).
4. Consciousness arose because it was more efficient than programming an organism with instructions enabling it to meet every contingency (Griffin, 1992).
5. Consciousness arose to enable organisms to meet the demands of a complex environment. However, environmental complexity is multi-dimensional; it cannot be measured on a scale (Godfrey-Smith, 2002).
6. Consciousness evolved to enable animals to deal with various kinds of environmental challenges their ancestors faced (Panksepp, 1998b).
7. Consciousness arose so as to enable animals to cope with immediate threats to their survival such as suffocation and thirst (Denton et al., 1996; Liotti et al., 1999; Parsons et al., 2001).
8. Consciousness gives its possessors the advantage of being able to guess what other individuals are thinking about and how they are feeling - in other words, a "theory of mind" (Whiten, 1997; Cartmill, 2000).
9. Consciousness arises as a spin-off from such a theory-of-mind mechanism (Carruthers, 2000).
10. Brain activity (as defined by EEG patterns) that supports consciousness in mammals is a precondition for all their complex array of survival and reproductive behaviour (e.g. locomotion, hunting, evading predators, mating, attending, learning and so on) (Baars, 2001).
11. Activities that are essential to the survival of our species - e.g. eating, raising children - require consciousness (Searle, 1999). It must therefore have a biological role.
12. Animals receive continual bodily feedback from their muscular movements when navigating their environment. Conscious animals have a very short real-time "muscular memory" which alerts them to any unexpected bodily feedback when probing their surroundings. A core circuit in their brains then enables them to cancel, at the last second, a movement they may have been planning, if an unexpected situation arises. This real-time veto-on-the-fly may save their lives (Cotterill, 1997).


4.A.2 Scientific findings regarding consciousness

4.A.2.1 Primary versus "higher-order" consciousness

Neuroscientists commonly distinguish between primary and higher-order forms of consciousness (Edelman, 1989). Both forms, as defined below, qualify as phenomenal in the philosophical sense. In this section, I focus on primary consciousness, as most of the discussion of animal consciousness pertains to this kind of consciousness. The current evidence for secondary or "higher-order" consciousness in non-human animals is summarised in the Appendix to Part B.

Table 4.5 - Different scientific usages of the term "consciousness"
Term: Primary consciousness (also called "core consciousness" or "feeling consciousness").

Definition: "Primary consciousness refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions" (Rose, 2002, p. 6).

Relevance to animals: Rose (2002) remarks that "[m]ost discussions about the possible existence of conscious awareness in non-human animals have been concerned with primary consciousness" (2002, p. 6).

Term: Secondary consciousness (also known as "extended consciousness" or "self-awareness").

Definition: "Higher-order consciousness includes awareness of one's self as an entity that exists separately from other entities; it has an autobiographical dimension, including a memory of past life events; an awareness of facts, such as one's language vocabulary; and a capacity for planning and anticipation of the future" (Rose, 2002, p. 6).

Our investigation in this section will focus on the evidence for primary consciousness in animals, as most of the evidence for consciousness in animals pertains to this form of consciousness.


4.A.2.2 Behavioural criteria for consciousness

The standard observational criterion used to establish the occurrence of primary consciousness in animals is accurate report (AR). I summarise the problems associated with this criterion in Table 4.6. I conclude that while evidence of accurate report in animals is highly suggestive, it cannot establish that animals possess phenomenal consciousness. Seth, Baars and Edelman (2005) propose to resolve this deadlock by defining consciousness in terms of neurological criteria.

Are there any other behavioural indicators that might allow us to unambiguously identify primary consciousness in animals? After examining three categories of behavioural indicators - Panksepp's criteria for affective consciousness; the behavioural indicators for conscious pain; and hedonic behaviour in animals - I concluded that the answer was in the negative. While some of the behaviours cited do indicate the occurrence of phenomenal consciousness, positive identification of a phenomenally conscious state cannot be made without either verbally interrogating the subject (as in some forms of accurate report) or checking that the behaviour is regulated by parts of the brain that are associated with phenomenal consciousness (see Table 4.10 below).

As the interrogation of non-human animals is highly problematic (for reasons discussed in Table 4.6), it follows that phenomenal consciousness in animals ultimately has to be defined as a neurological state in order for us to make some headway in identifying it. Behavioural indicators alone are too weak to settle the matter of which, if any, animals are phenomenally conscious. However, the combination of behavioural and neurological evidence constitutes a very powerful case for the occurrence of phenomenal consciousness in these animals.


Table 4.6 - Summary of problems associated with using the behavioural criteria for primary consciousness as an indicator of phenomenal consciousness
What are the behavioural criteria for primary consciousness?

The following criteria are used by neurologists to identify primary consciousness in human beings:

From the clinical perspective, primary consciousness is defined by

(1) sustained awareness of the environment in a way that is appropriate and meaningful,

(2) ability to immediately follow commands to perform novel actions, and

(3) exhibiting verbal or nonverbal communication indicating awareness of the ongoing interaction...

Thus reflexive or other stereotyped responses to sensory stimuli are excluded by this distinction (Rose, 2002, p. 6, italics mine).

How are these criteria assessed in humans and other animals?

  • Scientific and medical researchers make use of a standard index to measure primary consciousness:
    "Accurate report" (AR) is the standard behavioral index for consciousness in humans (Seth, Baars and Edelman, 2005, in press).

    "In humans reports do not have to be verbal; pressing a button, or any other voluntary response, is routinely accepted as adequate in research" (Baars, 2001, p. 35).
  • Since the criteria for primary consciousness allow for nonverbal communication, they can be applied to at least some non-human animals. For instance, recent experiments by Stoerig and Cowey (1997, p. 552) have shown that a monkey can be trained to respond to a stimulus in its visual field by touching its position on a screen, and to a blank trial (no stimulus) by touching a constantly present square on the screen that indicates "no stimulus". The monkey's ongoing responses fit the requirements for a nonverbal "accurate, verifiable report" (Baars, 2001) indicating "sustained awareness of the environment" (Rose, 2002, p. 6).

    According to Stoerig and Cowey (1997, p. 552), lack of awareness has also been experimentally verified in studies of monkeys with blindsight, a condition in which patients with damage to the visual cortex of the brain lose their subjective awareness of objects in a portion of their visual field, but sometimes retain the ability to make visual discriminations between objects in their blind field. Monkeys with blindsight made the same kinds of non-verbal reports as humans suffering from blindsight.

  • Other ways of measuring AR have been proposed:

    AR [accurate report] can also be tested by naming tasks, which have been reported in a variety of species, including primates, cetacea, and such birds as African Grey Parrots and Budgerigars... For a recent review, see (Griffin and Speck, 2004) (Seth, Baars and Edelman, 2005, in press).
  • Recent research has also shown ways in which an animal could satisfy Rose's second criterion for primary consciousness ("ability to immediately follow commands to perform novel actions"). For instance, some dolphins, after having been trained in an artificial language of 40 "words" - actually hand and arm gestures - can respond correctly to novel combinations of words (Hart, 1996, pp. 74-75; Herman, 2002, pp. 278-279). Sea lions can respond to novel instructions with up to seven signs, asking them, for instance, to bring a small black ball to a large white cone (Schusterman et al., 2002). The ability of Alex, the African grey parrot, to correctly "distinguish quantities of objects, including groups of novel items, heterogeneous collections, and sets in which objects are randomly arrayed" (Pepperberg, 1991), also seems to meet the novelty criterion.
Definitional problems:

  • Seth, Baars and Edelman (2005) acknowledge that non-verbal accurate report is difficult to distinguish from mere sensory discrimination, which is common in animals:

    [B]ehavioural measures risk a slippery slope. In principle, it is difficult to make a distinction between AR and other behavioral indices of sensory categories. The ability to distinguish between, and generalize across, classes of stimuli is extremely widespread in the animal kingdom. It has been demonstrated in mammals, birds, reptiles, fish, and invertebrates including insects; it may even reside in single-celled organisms. Even computers can produce an output that resembles AR, though few scientists would call them conscious on this basis. Further, stimulus categorization can take place unconsciously in humans (Seth, Baars and Edelman, 2005, in press).
  • There are methodological problems associated with applying accurate report to other species, whose ethogram may not be compatible with the physical response required. While a manually dextrous animal like a monkey can press a button to report what it sees, a fish cannot.

  • The procedure of testing animal awareness by commanding them to perform novel actions (Rose's (2002) second criterion for primary consciousness) is also philosophically problematic. How novel do the actions have to be? ("Raise your right paw and hold it in front of your nose." - I think Fido would flunk this one, although dolphins have shown an impressive ability to imitate human motor acts without requiring any training (Herman, 2002, p. 278).) What if the action is actually a novel combination of simple actions, each of which the animal has rehearsed thousands of times?
Is primary consciousness a necessary condition for phenomenal consciousness?

  • The occurrence of dreams, and the inability of some conscious human beings (e.g. newborn babies) to give accurate report, suggest that the ability to give an accurate, ongoing report of one's surroundings (primary consciousness) is not a necessary condition for phenomenal consciousness. But dreams are a derivative form of consciousness, whose content depends on what we experience when awake, and the inability of babies to report on their surroundings presumably reflects their lack of motor co-ordination. Even completely paralysed human adults can give accurate report by their eye movements (Seth, Baars and Edelman, 2005).

  • A few neurologists (Panksepp, 1998, 2001, 2003f; Liotti and Panksepp, 2003) consider the accurate report criterion to be too "cognitive", and have developed criteria for what they believe to be a second and more ancient form of consciousness residing in the brain (affective consciousness). I evaluate their criteria in the table below.
Is primary consciousness a sufficient condition for possessing phenomenal consciousness?

  • It would seem natural to infer, from the fact that monkeys with blindsight show the same inability to give accurate report in the damaged half of their visual field as humans with blindsight (who profess to be unaware of seeing anything in their affected fields), that normal monkeys, like humans with normal vision, are subjectively aware of what they see.

  • One prominent dissenter is Carruthers (2004b), who suggests that when a blindsighted monkey presses a "not seen" key, it is not reporting about its subjective lack of awareness, but simply signalling the (perceived) absence of a light. Normal monkeys perceive, but are not subjectively aware of what they perceive. For a perception to count as subjective, Carruthers argues, the percipient must be able to make a distinction between appearance and reality. Only if an individual can understand the difference between "looks green" and "is green" can we be sure that they have the phenomenology of green. To understand the difference, argues Carruthers, an individual must have a theory of mind which enables her to grasp that how an object looks to you may not be the same as how it looks to me.
Conclusion:
  • Primary consciousness is highly suggestive but inconclusive behavioural evidence for phenomenal consciousness in non-human animals.

  • Seth, Baars and Edelman (2005) acknowledge that our interpretation of the monkey's behaviour cannot be justified by the behavioural evidence alone, but argue that an additional factor justifies the attribution of conscious feelings to monkeys: the fact that "monkeys and humans share a wealth of neurobiological characteristics apparently relevant to consciousness (Logothetis, 2003)" (2005, italics mine).


Table 4.7 - Affective indicators of phenomenal consciousness
Definition:

Panksepp (1998, 2001, 2003f) and Liotti and Panksepp (2003) have proposed that we possess two distinct kinds of consciousness: cognitive consciousness, which includes perceptions, thoughts and higher-level thoughts about thoughts and requires a cortex, and affective consciousness, which relates to our feelings and arises within the brain's limbic system.

Criteria for affective consciousness:

For Panksepp (2002b), the numerous resemblances between the affective behaviour, neurobiology, anatomy and pharmacology of humans and those of non-human animals constitute very strong evidence for the occurrence of affective consciousness in non-human animals:

Overwhelming evidence shows that animal brains elaborate many states of affective consciousness... Of course there is no ultimate "proof" in science, merely the weight of evidence. To me it remains a mystery that certain scientists can ignore the mass of relevant evidence from (i) behavioral reinforcement studies; (ii) place preference-aversion studies; (iii) manifest and ubiquitous emotional vocalizations; (iv) neuroethological studies evoking the same emotional behavior from the same human/animal brain analogs and (v) the coherent translations between human and animal psychopharmacological work (Panksepp, 2002b).

An example of affective consciousness identified in non-human animals:

Panksepp and Burgdorf (2003c), in an article entitled "'Laughing' rats and the evolutionary antecedents of human joy?", discuss their recent discovery of play- and tickle-induced ultrasonic vocalisations in rats, which are analogous to laughter in human children. The authors identify no fewer than twelve points of resemblance between rat "laughter" and children's laughter, and argue that alternative non-mentalistic explanations are not well supported.

My comment: The investigation of an "affective consciousness" in animals is scientifically productive. However, even proponents of a separate "affective consciousness" admit that it cannot be defined using behavioural criteria alone: Panksepp's own criteria in the quote cited above (2002b) include neurological and psychopharmacological analogies between humans and animals.


Table 4.8 - Behavioural indicators of phenomenally conscious pain
Definition:

The International Association for the Study of Pain (1999; see Rose, 2002) defines "pain" as a conscious experience. In particular: (i) pain is an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage; (ii) pain is always subjective; (iii) pain is sometimes reported in the absence of tissue damage and the definition of pain should avoid tying pain to an external eliciting stimulus (Rose, 2002, p. 15).

Nociception, defined as "the activity induced in ...nociceptive pathways by a noxious stimulus" (2002, p. 15), "does not result in pain unless the neural activity associated with it reaches consciousness" (Rose, 2002, p. 16).

Behavioural criteria:

Various behaviours in animals have been regarded at different times as indicators of pain. These include: stress responses in all cellular organisms; nociceptive responses to noxious stimuli in nearly all animals; the presence of pain-killing opiates within the brainstems of various kinds of animals; flavour aversion learning; classical and instrumental conditioning; self-administration of analgesics; and pain-guarding. However, clinical assessments of pain in human patients do not rely solely on these criteria, as they are considered insufficient to define the occurrence of pain.

My comment:
  • I conclude that these behaviours are not sufficient to define the presence of phenomenal consciousness, as most of the responses described occur at levels of the brain below the level of consciousness. In all vertebrates, the fundamental behavioural reactions to injurious stimuli are generated by neural systems in the spinal cord and brainstem. These reactions include withdrawal of the stimulated body part, leg locomotion, struggling, facial grimacing, and in some animals vocalisation (Rose, 2002, pp. 16-17). These behavioural reactions occur in people who are unconscious - for example, people with extensive cortical damage and children born without cerebral hemispheres (Rose, 2002, pp. 13-14, 17), as well as in animals. Human beings are never aware of the neural activity taking place below the level of the cortex - whether it be in the spinal cord, brainstem or cerebral regions beneath the neocortex (Rose, 2002, p. 6).

  • There are other behavioural responses to pain which do indicate the presence of phenomenal consciousness: the cognitive-evaluative components of pain (e.g. attention to the pain, perceived threat to the individual, and conscious generation of strategies for dealing with the pain), and affective components (experiencing the pain as emotionally unpleasant). However, the occurrence of pain can only be definitively established by the patient's verbal report. For instance, people who have had surgery to their anterior cingulate gyrus to alleviate chronic pain report that the pain is still there (sensory-information component) but that it no longer bothers them (Rose, 2002, pp. 19-21).

Example - do fish feel pain?

Rose (2002), after conducting an exhaustive review of the literature on the neurology and behaviour of fish and on the clinical indicators used by neurologists to assess pain, concluded that consciousness of any kind in fish is "a neurological impossibility" (2002, p. 2). More recently, Rose (2003a) has written a devastating critique of a much-publicised report by Sneddon, Braithwaite and Gentle (2003), which claimed to have identified evidence of pain guarding in fish.


Table 4.9 - Hedonic behaviour as evidence of phenomenal consciousness in animals
Definition:

Hedonic behaviour can be defined as pleasure-seeking activity on an animal's part.

Criteria:

Various indicators have been proposed as evidence of conscious pleasure in animals, including: self-stimulation, intoxication, drug addiction, the phenomenon of satiety, and most impressively, (i) the willingness of some animals to make hedonic trade-offs whereby they expose themselves for a short time to an aversive stimulus in order to procure some attractive stimulus (Cabanac, 2003), and (ii) the occurrence of "rational" and "irrational" forms of pursuit in animals (Berridge, 2003). Irrational pursuit occurs when an animal desires something it neither likes nor expects to like, and can be identified when an animal, under the influence of a drug that boosts dopamine activity, is suddenly presented with a "rewarding" stimulus, which cues hyperactive pursuit of the stimulus.

My comment:

Neither the willingness of some animals to make hedonic trade-offs, nor the presence of "rational" and "irrational" forms of pursuit, can be treated as an unambiguous indicator of phenomenally conscious pleasure in animals. Berridge (2001, 2003) presents evidence from human studies that irrational desires need not be conscious: humans can be influenced to like or dislike something simply by subliminal exposure to stimuli which they report being unaware of.


4.A.2.3 Neural pre-requisites for consciousness

(I would like to acknowledge a special debt of gratitude here to Dr. Jaak Panksepp, Dr. James Rose and Dr. David Edelman, for their patience in answering my queries. Any errors here are entirely my own.)

Table 4.10 summarises the neurological indicators for consciousness, which make the attribution of primary consciousness to mammals highly plausible. I propose that the only way to resolve the argumentative impasse regarding phenomenal consciousness in animals is to re-define phenomenal consciousness: instead of regarding it as tied to certain forms of behaviour, we would do better to simply define it in terms of the neurological conditions which generate it.

Problems arise when assessing consciousness in creatures whose brains are different in design from our own: here, we have to rely on analogy. As functional analogies between the brains of mammals and other animals are incomplete at present, we cannot definitively conclude that birds are phenomenally conscious, and the question of whether octopuses are conscious must remain even more speculative.

The major divisions of the brain. Diagram courtesy of Dr Anthony Walsh, Chairman, Department of Psychology, Salve Regina University, Rhode Island.
Note: the term "brain stem" is used to denote the diencephalon (hypothalamus and thalamus), mesencephalon (mid-brain) and rhombencephalon (hind-brain).

The reticular activating system (RAS) comprises parts of the medulla oblongata, the pons and midbrain and receives input from the body's senses - excluding smell. When the parts of the RAS are active, nerve impulses pass upward to widespread areas of the cerebral cortex, both directly and via the thalamus, effecting a generalised increase in cortical activity associated with waking or consciousness. Image courtesy of Dr. Rosemary Boon, founder of Learning Discoveries Psychological Services.

Divisions of the cerebral cortex. Diagram courtesy of Dr. Gleb Belov, Department of Mathematics, Technical University of Dresden, Germany.


Table 4.10 - Key neurological features of primary consciousness
What are the distinguishing neural properties of primary consciousness?

There are three major properties of consciousness that are fairly well accepted by neurobiologists (Seth, Baars and Edelman, 2005):

  • (1) EEG signature. Waking consciousness has a distinctive EEG signature of irregular, low-amplitude, and fast electrical activity in the brain ranging from 12 to 70 Hz.

    Conscious EEG looks markedly different from unconscious states - like deep sleep, epileptic loss of consciousness and general anaesthesia - which are all characterized by regular, high-amplitude, and slow voltages at less than 4 Hz (Seth, Baars and Edelman, 2005, italics mine).

  • (2) Cortex and thalamus. Consciousness requires a thalamus, a cortex and recursive (or reentrant) pathways between the two. The thalamus is a switching centre which functions as a "doorway" (Roth, 2003, p. 35) to the cerebral cortex - the brain's outer layer. A third region, the basal ganglia (situated deep in the forebrain) is also involved: consciousness "almost certainly" requires complex and recursive pathways between regions of the cortex, the thalamus, and the basal ganglia (David Edelman, personal email, 19 July 2004). Profound damage to other regions of the brain (e.g. the cerebellum) or even the removal of an entire hemisphere of the cerebral cortex will not destroy consciousness, but damage to the thalamus can do so. Within the thalamus, the intralaminar nuclei can be described as enabling factors of consciousness: acute bilateral loss of function in these small structures leads to immediate coma or profound disruption in arousal and consciousness (Koch and Crick, 2001).

  • (3) Widespread brain activity. Conscious input activates disparate regions of the brain's cortex: activity appears to spread from the sensory cortex to parietal, prefrontal and medial-temporal areas (see illustration below), whereas input that we are not consciously aware of remains confined to localised regions of the sensory cortex. As novel, conscious tasks turn into automatic and unconscious skills with practice, cortical activity becomes less widespread and more focal.

Which parts of the brain are required for primary consciousness?
  • In human beings, a thalamus and reticular activating system are necessary but not sufficient conditions for primary consciousness, as shown by the fact that we are unaware of neural activity that is confined to the brainstem:

    [A] large part of the activity occurring in our brain is unavailable to our conscious awareness (Dolan, 2000; Edelman and Tononi, 2000; Koch and Crick, 2000; Libet, 1999; Merikel and Daneman, 2000). This is true of some types of cortical activity and is true for all brainstem and spinal cord activity (Rose, 2002, p. 15).

    Note: as used here, the term "brain stem" includes the thalamus and hypothalamus, mid-brain, pons, cerebellum and medulla oblongata.

  • Only when neural activity reaches the cerebral cortex - the extensive outer layer of grey matter in the brain's cerebral hemispheres - does it translate into conscious awareness. This region of the brain is believed to be largely responsible for sensation, voluntary muscle movement, thought, reasoning, and memory.

    Destruction of the cerebral cortex leaves a human being in a persistent vegetative state, capable of behavioural wakefulness (e.g. eyes are open) but devoid of all conscious awareness. PVS patients are still capable of stereotypical responses to noxious stimuli (Rose, 2002, p. 13). Non-primate mammals whose cerebral hemispheres have been destroyed are capable of locomotion, postural orientation, elements of mating behavior, and fully developed behavioral reactions to noxious stimuli, but cannot survive without assisted feeding (Rose, 2002, p. 13).

Which structures in the cerebral cortex are required for primary consciousness?
  • The cerebral cortex is mostly made up of a six-layered neocortex, technically known as isocortex. It is this laminated structure that supports consciousness in human beings:

    Extensive evidence demonstrates that our capacity for conscious awareness of our experiences and of our own existence depends on the functions of this expansive, specialized neocortex. This evidence has come from diverse sources such as clinical neuropsychology (Kolb and Whishaw, 1995), neurology (Young et al., 1998; Laureys et al., 1999, 2000a-c), neurosurgery (Kihlstrom et al., 1999), functional brain imaging (Dolan, 2000; Laureys et al., 1999, 2000a-c), electrophysiology (Libet, 1999) and cognitive neuroscience (Guzeldere et al., 2000; Merikle and Daneman, 2000; Preuss, 2000).

    We are unaware of the perpetual neural activity that is confined to subcortical regions of the central nervous system, including cerebral regions beneath the neocortex as well as the brainstem and spinal cord (Dolan, 2000; Guzeldere et al., 2000; Jouvet, 1969; Kihlstrom et al., 1999; Treede et al., 1999) (Rose, 2002, p. 6, italics mine).

  • Human consciousness appears to require brain activity that is diverse, temporally conditioned and of high informational complexity. The neocortex satisfies these criteria because it has two unique structural features:

    (1) exceptionally high connectivity within the neocortex and between the cortex and thalamus;

    (2) enough mass and local functional specialisation to permit regionally specialised, differentiated activity patterns (Rose, 2002, p. 7, italics mine).

Which parts of the neocortex are required for primary consciousness?
  • The neocortex is divided into primary and secondary regions (which process low-level sensory information and handle motor functions), and the associative regions. Brain monitoring techniques indicate that in human beings, only processes that take place within the associative regions of the cortex are accompanied by consciousness; activities which are confined to the primary sensory cortex or processed outside the cortex are inaccessible to consciousness (Roth, 2003, pp. 36, 38; Rose, 2002, p. 15). Consciousness thus depends on the functions of the association cortex, not primary cortex. The associative regions are distinguished by their high level of integration and large number of connections with other regions of the brain (Roth, 2003, p. 38) - corresponding to Edelman's third major property of consciousness.

  • It is now believed that slow-wave sleep, coma and PVS cause a loss of primary (and phenomenal) consciousness precisely because in these states, the ability to integrate information between different regions of the cerebral cortex is greatly reduced (Tononi, 2004; Baars, 2003).


Table 4.11 - Is consciousness possible in brain-damaged mammals, which lack a cerebral cortex?
(a) Evidence for consciousness without a cortex

Some neurologists (Panksepp, 1998, 2001, 2003f; Denton et al., 1996; Denton et al., 1999; Egan et al., 2003; Liotti et al., 2001; Parsons et al., 2000; Parsons et al., 2001) question the current neurological consensus and argue that conscious feelings may occur outside the cerebral cortex. Perhaps their most interesting evidence comes from studies of hydranencephalic children (who have little or no cerebral cortex) and decorticate animals (whose cerebral cortex has been removed). After carefully examining their technical arguments in scientific journals, I concluded that:

(i) for animals whose cerebral cortex was removed during infancy, the trauma of decortication may have affected the neural development of their brain stem, effectively "corticising" it so that some parts were able to take over some of the functionality normally handled by a cerebral cortex (vertical plasticity);

(ii) while there appears to be a real distinction between an affective consciousness (centred in the anterior cingulate, which borders the cerebral cortex), and a cognitive consciousness (centred in the cerebral cortex), it is inaccurate to describe the former as completely non-cognitive, as it still involves crude, low-level processing of sensory inputs and hence minimal cognition;

(iii) the evidence for feelings in mammals completely lacking both a cerebral cortex and an anterior cingulate cortex is doubtful;

(iv) in any case, since the anterior cingulate has a complex layered structure and is not found in birds or reptiles, it does not help the case for feelings in non-mammals.

(b) Could consciousness be located in the cerebellum?

The cerebellum has sometimes been proposed as an alternative site for consciousness outside the cerebral cortex: it is the only structure in the brains of non-mammals with an ability to rapidly integrate diverse kinds of information comparable to that of the cerebral cortex. Interestingly, the cerebellum, located at the back of the brain, "contains probably more neurons and just as many connections as the cerebral cortex, receives mapped inputs from the environment, and controls several outputs", and yet "lesions or ablations indicate that the direct contribution of the cerebellum to conscious experience is minimal" (Tononi, 2004), and "removal of the cerebellum does not severely compromise consciousness" (Panksepp, 1998, p. 311). Activity in the cerebellum is thought not to be associated with consciousness because different regions of the cerebellum tend to operate independently of one another, with little integration of information between regions (Tononi, 2004).


Table 4.12 - Which animals satisfy the neurological criteria for primary consciousness?
Mammals
  • Regarding Edelman's first distinguishing property of consciousness: while behavioural sleep is found in most animals, true brain sleep (which is distinguished from brain wakefulness by its EEG patterns) is confined to mammals and birds. According to Baars (2001), "all mammalian species studied so far show the same massive contrast in the electrical activity between waking and deep sleep".
  • The reentrant interactions between thalamus, cortex and basal ganglia which characterise consciousness occur in mammals.
  • Mammals possess a true neocortex (Rose, 2002, p. 10). Although some mammals have much more neocortex in proportion to their body size than others, which probably explains the wide variation in different species' performance in problem-solving tasks, "the size of the neocortex seems to be irrelevant to the existence of wakefulness and perceptual consciousness" among mammals (Baars, 2001).

Birds
  • Regarding Edelman's first distinguishing property of consciousness: while behavioural sleep is found in most animals, true brain sleep (which is distinguished from brain wakefulness by its EEG patterns) is confined to mammals and birds. Birds' waking EEG patterns resemble those of mammals; and their sleep patterns are very similar to those of mammals, except that REM sleep is shorter (Kavanau, 1997, p. 257; Cartmill, 2000; Edelman, personal email, 19 July 2004).
  • To date, the reentrant interactions between thalamus, cortex and basal ganglia which characterise consciousness have only been found in mammals, but they may also occur in other vertebrates such as birds. More research needs to be done (Edelman, personal email, 19 July 2004).
  • Among animals, only mammals possess a true neocortex (Rose, 2002, p. 10). Some authors have claimed that reptiles and birds have a primordial neocortex, but it does not have the layered structure found in the brains of mammals. Thus it is generally agreed that a fully developed neocortex is present only in mammals (Nieuwenhuys, 1998). Specifically, reptiles and birds do not appear to possess any brain structures possessing the special features of the association cortex - a high level of integration and a large number of connections with other regions of the brain.

  • In the light of the above evidence, many authors are disposed to deny non-mammals any kind of conscious awareness (Edelman and Tononi, 2000; Rose, 2002, who cites supporting authorities).

  • On the other hand, the dorsal ventricular ridge (DVR) in reptiles and birds serves as a principal integratory centre and exhibits a pattern of auditory and visual connections with sensory centres and the thalamus which is broadly similar to that of the sensory neocortex in mammals. Fish and amphibians lack this structure (Russell, 1999; Aboitiz, Morales and Montiel, 2000).

  • The ventricular ridges of birds are well-developed, but not laminated (Kavanau, 1997, p. 258).

    However, even though largely non-laminated, the avian telencephalon [anterior forebrain - V.T.] can generate visual performances of a complexity rivaling and even exceeding those of mammals, previously thought to have been correlated uniquely with cortical lamination... The mechanisms of visual information processing in the brains of birds are ... at least as efficient as those in the mammalian striate cortex (Kavanau, 1997, p. 257).

  • The dorsal ventricular ridge in birds and reptiles should not be regarded as homologous to the mammalian neocortex; instead, it should be viewed as analogous in its causal role in regulating animal behaviour. (Homologous structures are those which have originated from the same structure in a common ancestor; analogous structures play a similar functional role.) Experts continue to disagree as to which parts of the reptilian and avian telencephalon [the anterior division of the forebrain, which includes the cerebrum] correspond to the neocortex. Currently, there is no single criterion by which homology between structures can be established. Commonly used criteria include identical patterns of connectivity to other brain parts, neurochemistry, and embryonic origins. However, these approaches yield inconsistent results (Aboitiz, Morales and Montiel, 2000).

  • In birds, the dorsal ventricular ridge includes two areas: the hyperstriatum ventrale and neostriatum (Medina, 2002). There is good evidence that the mammalian neocortex and the neostriatum-hyperstriatum ventrale complex in birds have similar integrative roles. Interestingly, the relative size of the hyperstriatum ventrale in different species of birds is the best predictor of their feeding innovation rate, which is regarded as an indicator of cognition (Timmermans et al., 2000).
  • Tool-making ability in different bird species has also been shown to correlate with the size of their neo- and hyper-striatum ventrale (Chappell and Kacelnik, 2004). The neostriatum caudolaterale is a structure believed to correspond to the frontal cortex in mammals, which is involved in planning of movement (Lissek and Gunturkun, 2003).
  • May display cognitive reactions to pain. Evidence from studies of animal pain and hedonic behaviour lends support to the conclusion that phenomenal consciousness is confined to animals that have passed a certain neurological threshold. While the fundamental behavioural reactions to injurious stimuli (found in nearly all animals) are regulated at levels of the brain below the level of consciousness, the cognitive-evaluative components of pain (attention to the pain, perceived threat to the individual, and conscious generation of strategies for dealing with the pain), as well as the emotional unpleasantness (suffering) aspect of pain, are controlled by activity in the cortex - specifically, the anterior cingulate gyrus, prefrontal cortex, and supplementary motor area (Rose, 2002, pp. 19-21). These areas are only known to occur in mammals, although birds are thought to possess analogous structures (Edelman, Baars and Seth, 2005).

Reptiles
  • Do not really meet Seth, Baars and Edelman's first criterion for consciousness. EEG patterns in sleeping reptiles show arrhythmic spiking that resembles non-REM sleep, but lacks the slow-wave patterns that characterise sleep in mammals and birds. In reptiles, sleep is regulated by the limbic system instead of the cerebrum (Kavanau, 1997; Backer, 1998).

  • Among animals, only mammals possess a true neocortex (Rose, 2002, p. 10). Some authors have claimed that reptiles and birds have a primordial neocortex, but it does not have the layered structure found in the brains of mammals. Thus it is generally agreed that a fully developed neocortex is present only in mammals (Nieuwenhuys, 1998). Specifically, reptiles and birds do not appear to possess any brain structures possessing the special features of the association cortex - a high level of integration and a large number of connections with other regions of the brain.
  • Reentrant interactions between cortex, thalamus and basal ganglia (Seth, Baars and Edelman's second feature of consciousness) have been located only in mammals to date. More research needs to be done (Edelman, personal email, 19 July 2004).
  • On the other hand, the dorsal ventricular ridge (DVR) in reptiles serves as a principal integratory centre and exhibits a pattern of auditory and visual connections with sensory centres and the thalamus which is broadly similar to that of the sensory neocortex in mammals. Fish and amphibians lack this structure (Russell, 1999; Aboitiz, Morales and Montiel, 2000).
  • The hedonic behaviour of vertebrate animals (Dawkins, 1994; Cabanac, 1999, 2003) is confined to reptiles, birds and mammals; amphibians do not exhibit it (Cabanac, 2003).
  • On the other hand, reptiles lack the three forms of consciousness (integrative consciousness, object consciousness and anticipatory consciousness) described in Table 4.2.

My verdict: phenomenal consciousness in reptiles is possible, but unlikely, given the massive neurological dissimilarities between mammals and reptiles.

Fish and amphibia
  • To a limited degree, all vertebrates possess the key structures which figure in Edelman's (2005) second distinguishing property of consciousness described above. The major subdivisions of the brain - spinal cord, hindbrain, midbrain, diencephalon, telencephalon - are found in all vertebrates. The thalamus is also present. All vertebrate brains have a forebrain pallium, known as the cerebral cortex in mammals (Prescott, 1999, p. 9).
  • Lack the integrative centre (the dorsal ventricular ridge (DVR)) found in reptiles and birds.
  • Display no hedonic behaviour whatsoever (Cabanac, 1999, 2003).
My verdict: There is overwhelming neurological evidence that fish and amphibia are not phenomenally conscious (Rose, 2002). The question we should ask is: does it really matter?

Evidence for complex agency in fish

According to Culum Brown ("New Scientist", 12 June 2004, p. 42), fish have a fantastic spatial memory, equal to that of any other vertebrate, including non-human primates. Fish can also recognise individuals and keep track of complex social relationships. The report cites the introductory chapter of Fish and Fisheries (September 2003) as claiming that fish are "steeped in social intelligence, pursuing Machiavellian strategies of manipulation, punishment and reconciliation ... exhibiting stable cultural traditions and cooperating to inspect predators and catch food".

Of course, it needs to be borne in mind that there are 28,500 species of fish; some are a lot "smarter" than others. Rose (2002) writes that "[i]n spite of the diversity and complexity among species, the behaviors of fishes are nonetheless highly stereotyped and invariant for a given species" (2002, p. 9, italics mine). He adds that these behaviours are controlled almost entirely by the brainstem (which, as we have seen, is devoid of consciousness) and remain strikingly preserved even if the fishes' cerebral hemispheres are removed.

Octopuses
  • Possess a large brain of up to 170 million neurons. Brain size relative to body mass approaches that of some birds (Edelman, Baars and Seth, 2005).
  • The most suggestive feature of conscious states is the finding of EEG patterns (including event related potentials) that look similar to those in awake, conscious vertebrates (Edelman, Baars and Seth, 2005).
  • At present, nothing even remotely like the reentrant loops found in the mammalian thalamocortical system has been identified in octopuses.
  • Pre-exposure to a species-typical problem (a crab in a plugged jar, which the octopus has the opportunity to explore) does not reduce the time the octopus takes to retrieve the crab from the jar. From these experiments, some authorities have concluded that octopuses are not conscious.
  • Above the level of the neuron, the layout of the octopus brain looks completely different from that of a vertebrate brain.
  • There are more neurons in an octopus's tentacles than in its entire brain. Does it think with its arms?
My verdict: It's too soon to tell if octopuses are phenomenally conscious. We need to find more analogies between their behaviour and ours.
Honeybees
  • Although their brain-size to body-size ratio compares favourably with that of some vertebrates, the number of neurons in their brains is thought to be too low to support consciousness (Edelman, personal email, 19 July 2004).
  • On the other hand, honeybees have shown an apparent capacity to form concepts of shapes and recognise letters of the alphabet. There is good evidence that they possess the concepts of "same" and "different" (Giurfa et al., 2001). Gould (2002) considers them to be capable of insight learning.

My verdict: the fact that bees have impressive conceptual abilities does not tell us that they are conscious, for the simple reason that we do not know what consciousness evolved to do in the first place. As we have seen, intentional agency is certainly possible without consciousness; why not insight?


Table 4.13 - Status of philosophical arguments for the occurrence of phenomenal consciousness in different kinds of animals
The failure of similarity arguments

Where does that leave us? While the similarity arguments beloved by philosophers can be used to make a strong cumulative case that conscious feelings are widespread among mammals, the massive dissimilarities between the neocortex of the mammalian brain and the much more primitive structures in the brains of birds and reptiles effectively undermine any arguments for conscious feelings in these animals that are based on "similarity" alone.

Analogical causal arguments for phenomenal consciousness

As we cannot yet identify structures in the brains of birds that are homologous to the mammalian neocortex, any argument for consciousness in non-mammals must instead be based on structures that play an analogous causal role in regulating behaviour whose cognitive sophistication equals or surpasses that of mammals.

Because birds meet all the other neural requirements for consciousness and equal mammals in behavioural sophistication, I conclude that birds are probably phenomenally conscious.

Arguments against consciousness in lower vertebrates

Because the brains of all vertebrates are built according to the same basic pattern (Rose, 2002), we can formulate a counter-analogical argument that fish and amphibians are not phenomenally conscious, on account of the massive neural and behavioural disparities between these vertebrates and conscious mammals.

Invertebrates

For invertebrates, whose brains are too unlike those of mammals to permit even a functional comparison of their brains with ours, an inferential approach is required to ascertain whether they have conscious feelings: we need to identify behaviour on their part that cannot be plausibly explained except in terms of phenomenally conscious states. Edelman, Baars and Seth (2005) make some useful suggestions regarding future neurophysiological and behavioural research with these creatures.


4.A.3 Ethical implications of animal consciousness

So far, our investigation points to at least three distinct senses in which interests can be ascribed to creatures:

For some philosophers, a capacity for phenomenal consciousness is regarded as a sine qua non for having interests and being morally relevant. However, the above summary suggests that the ethical divide between mindless organisms and animals with minimal minds is a greater one than that between animals with minimal minds and phenomenally conscious animals, and the division between the simplest organisms and assemblages lacking intrinsic finality is greater still. Animals' interests, whether conscious or not, can be measured, and can be harmed by our actions. In the Appendix, I provide specific examples of how the welfare of fish (who lack phenomenal consciousness) can be measured using specific indices, and of how it can be harmed by practices such as aquaculture and angling.

Of course, we have a strong prima facie duty to refrain from treating phenomenally conscious animals cruelly, and the duty (under more restricted circumstances) to be kind to them. For companion animals, that would entail befriending them. Logically, any animals that lacked phenomenal consciousness (such as goldfish) could not serve as true "companions".