Chapter 4 - Animal Consciousness and Higher Mental States


1. Phenomenal Consciousness in Animals

In what follows, I shall use the term phenomenal consciousness as Block (1997) does, to denote states with a subjective feel, which can be immediately recognised if not defined. Van Gulick (2004) prefers to use the term qualitative consciousness for subjective feelings or qualia (such as the experience of seeing red), and defines phenomenal consciousness in a richer sense, as including the overall structure of experience, in addition to sensory qualia.

Although scientists do not use the term "phenomenal consciousness", they employ a closely related term, primary consciousness, which "refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions" but excludes "awareness of one's self as an entity that exists separately from other entities" (Rose, 2002, p. 6). The main criterion used by scientists to verify the occurrence of primary consciousness in an individual is his/her capacity to give an accurate verbal or non-verbal report on his/her surroundings.

Since some non-human animals can give non-verbal reports of events in their environment, the relation between phenomenal and primary consciousness is philosophically significant for the purposes of this thesis.

1(a) Philosophical distinctions regarding consciousness

The term "consciousness" has various scientific and philosophical usages, which have to be teased apart before we can address the perennial question of which animals have conscious feelings - or, as philosophers would say, "Which animals are phenomenally conscious?"

N.B. For additional distinctions between different kinds of consciousness, see Van Gulick (2004): http://plato.stanford.edu/entries/consciousness/

Table 4.1 - Various philosophical usages of the term "consciousness".
Based on Rosenthal (1990, 2002); Dretske (1997); Block (1995, 1997, 1998, 2001); Carruthers (2000, 2004); Lurz (2003); and Van Gulick (2004).

Term: 1. Creature consciousness
Definition: Consciousness as applied to a living organism (e.g. a bird).
Comments: The distinction between creature consciousness and state consciousness was first suggested by Rosenthal (1986). Grammatically it is unexceptionable, but by itself it does not tell us which animals are phenomenally conscious.

Term: 2. State consciousness
Definition: Consciousness as applied to mental states and processes (e.g. a bird's perception of a worm).
Comments: See the comments for creature consciousness above.

Ned Block (1997) has criticised the concept of state consciousness as a mongrel concept, and proposed a distinction between two different types of state consciousness: access consciousness and phenomenal consciousness. A mental state is access-conscious if it is poised to be used for the direct (i.e. ready-to-go) rational control of action and speech. Phenomenally conscious states are states with a subjective feel or phenomenology, which, according to Block, we cannot define but we can immediately recognise in ourselves. See comments below.

Varieties of creature consciousness (see Van Gulick, 2004)
Term: Sentience
Definition: A sentient creature is one that is capable of sensing and responding to its world (Armstrong, 1981).
Comments: Being conscious in this sense may admit of degrees, and just what sort of sensory capacities are sufficient may not be sharply defined (Van Gulick, 2004). See my comments below on transitive creature consciousness.

Term: Wakefulness
Definition: An organism exhibits wakefulness only if it is awake and normally alert (Van Gulick, 2004).
Comments: "Wakefulness" is a vague term.

(a) There are two criteria for wakefulness - behavioural criteria (which apply to nearly all animals) and brain-based criteria (which apply to mammals and birds).

(b) Does dreaming count as a form of wakefulness? What about hypnosis? (Van Gulick, 2004)

Term: Self-consciousness
Definition: Consciousness as applied to creatures that are not only aware but also aware that they are aware (Carruthers, 2000). See my comments below on self-consciousness defined as a form of state consciousness.
Comments: "As yet, the only evidence that an animal may have an awareness of the 'self' versus awareness of other individuals has been demonstrated in chimpanzees and possibly orang-utans and dolphins" (Emery and Clayton, 2004, p. 41; see also Gallup, Anderson and Shillito, 2002; Reiss and Marino, 2001). "The self-awareness requirement might get interpreted in a variety of ways, and which creatures would qualify as conscious in the relevant sense will vary accordingly. If it is taken to involve explicit conceptual self-awareness, many non-human animals and even young children might fail to qualify" (Van Gulick, 2004).
"What it is like" consciousness (Nagel, 1974). According to Nagel, a being is conscious just if there is "something that it is like" to be that creature, i.e., some subjective way the world seems or appears from the creature's mental or experiential point of view (Van Gulick, 2004). In Nagel's original example, bats are conscious because there is something that it is like" for a bat to experience its world through its echo-locatory senses. This definition, while psychologically interesting, tells us nothing about which animals possess this kind of consciousness. That is, it does not solve the Distribution Problem (Allen, 2002).
Term: Subject of conscious states
Definition: A conscious organism, according to this definition, is simply one that has conscious mental states (Van Gulick, 2004).
Comments: This definition begs the question of what a conscious mental state is.
Term: Intransitive creature consciousness
Definition: Being awake as opposed to asleep (e.g. a bird possesses intransitive creature consciousness if it is awake and not asleep or comatose).
Comments: The term intransitive creature consciousness is inadequate as it stands, as it assumes that "wakefulness" and "sleep" have simple, clear-cut definitions. In fact, psychologists use two kinds of criteria for sleep: behavioural and electrophysiological. Behavioural sleep is found in nearly all animals, including cnidaria (coelenterates, such as jellyfish), although a few animals (e.g. alligators) show no sign of this trait. (The term "behavioural sleep" has not been defined for bacteria, protoctista, plants, or fungi.)

Animal sleep that also satisfies electrophysiological criteria is called true or brain sleep. Brain sleep is defined by various criteria, including: EEG patterns that distinguish it from wakefulness; a lack of or decrease in awareness of environmental stimuli; and the maintenance of core body temperature (in warm-blooded creatures) (White, 2000). There is a massive contrast between the EEG patterns of human patients in states of global unconsciousness (deep unconscious sleep, coma, persistent vegetative state, general anaesthesia and epileptic states of absence) and the EEG of patients in a state of waking consciousness. All mammals and birds engage in brain sleep, but no other animals do (Shaw et al., 2000). Some neuroscientists believe brain sleep to be intimately related to phenomenal consciousness (Cartmill, 2000; Baars, 2001; White, 2000).

Term: Transitive creature consciousness
Definition: Consciousness of objects, events, properties or facts (Dretske, 1997); also called perception. Example: a bird's consciousness of a wriggling worm that looks good to eat.
Comments: Transitive creature consciousness, in its broadest sense, is a property of all cellular organisms, as they all possess senses of some sort (see chapter two, part B). In a narrower sense, the term applies to all organisms with a nervous system.

The vomeronasal system, which responds to pheromones and affects human behaviour but is devoid of phenomenality (Allen, 2003, p. 13), is a good example of perception (transitive creature consciousness) occurring in the absence of phenomenal consciousness. The phenomenon of blindsight in humans and monkeys (Stoerig and Cowey, 1997, pp. 536-538, 552) is another example.

"Transitive creature consciousness" is not a clearcut concept. For instance, blindsight varies across patients in its degree of severity, and the specificity of the responses shown by these patients varies accordingly (Stoerig and Cowey, 1997, pp. 536-538). Which of these responses one decides to count as instances of transitive creature consciousness depends on how one broadly defines "perception".

Term: Outward transitive creature consciousness (Lurz, 2003)
Definition: An animal's consciousness of an object outside its mind (e.g. a bird's consciousness of a worm).
Comments: According to Lurz's (2003) definition, being outwardly conscious of some external object does not require paying deliberate attention to it.

Term: Inward transitive creature consciousness (Lurz, 2003)
Definition: An animal's consciousness of an object inside its mind, such as an unpleasant sensation (e.g. a bird's consciousness of the unpleasant sensation it experiences after eating a poisonous Monarch butterfly).
Comments: Lurz (2003) stresses that by inward consciousness he does not mean introspection; being inwardly conscious of one's thoughts and experiences does not require paying deliberate attention to them.
Varieties of state consciousness (see Van Gulick, 2004)
Term: Access consciousness
Definition: According to Block, "a representation is access-conscious if it is actively poised for direct control of reasoning, reporting and action" (1998, p. 3). Direct control, according to Block, occurs "when a representation is poised for free use as a premise in reasoning and can be freely reported" (1998, p. 4). Elsewhere, Block (1995) stipulates that an access-conscious state must be (i) poised to be used as a premise in reasoning, (ii) poised for rational control of action, and (iii) poised for rational control of speech.

Block (2001) now prefers to use the term global access instead of access consciousness.

Few if any non-human animals would be capable of meeting Block's criteria. The concept of access consciousness needs to be broadened before it can be legitimately applied to non-human animals.

Among the cases discussed in the philosophical literature, the strongest evidence that access consciousness can exist in the absence of phenomenal consciousness comes from recent studies of the mammalian visual system (discussed in Carruthers 2004b). Research by Milner and Goodale (1995) suggests that each human brain has two visual systems: a phenomenally conscious system that allows the subject to select a course of action but which she cannot attend to when actually executing her movements, and an access-conscious system that guides her detailed movements but is not phenomenally aware. However, these findings relate to just one sensory modality (sight) and only apply to a limited class of animals (mammals).

The case of the distracted driver, who is supposedly able to navigate his car home despite being oblivious to his visual states, is not a convincing example of access consciousness in the absence of phenomenal consciousness. The driver has phenomenal experiences, but the other matter that he is thinking about demands a much greater share of his cognitive resources, with the result that the information about the visual scene is quickly bumped from working memory and never encoded in long-term memory (Wright, 2003). See Appendix.

Rosenthal (2002) faults Block's definition of access consciousness, on the grounds that one's ability to rationally control one's actions does not require consciousness of any kind. He finds Block's new definition equally problematic: global access is neither necessary nor sufficient for consciousness.

Term: Phenomenal consciousness
Definition: Block (1995) defines phenomenally conscious states as states with a subjective feel or phenomenology, which we cannot define but can immediately recognise in ourselves.

Recently, Block (2001) has forsworn the term "phenomenal consciousness" in favour of what he calls phenomenality.

Van Gulick (2004) defines phenomenal consciousness in a richer sense than Block: it applies to the overall structure of experience and involves far more than sensory qualia (raw subjective feelings like the experience of seeing red).

Currently, there is no philosophical or scientific consensus regarding what phenomenal consciousness is, how it first arose in organisms, or even what it is for (i.e. what function it serves).

Rosenthal (2002) has criticised Block's (2001) account of phenomenal consciousness for its ambiguity between two very different mental properties, which Rosenthal refers to as thin phenomenality (the occurrence of a qualitative character without a subjective feeling of what it's like) and thick phenomenality (the subjective occurrence of mental qualities). Rosenthal considers only the latter to be truly conscious.

Block argues that human beings are capable of having phenomenally conscious experiences without access consciousness, due to lack of attention or rapid memory loss. In his discussion of the refrigerator that suddenly goes off, Block cites "the feeling that one has been hearing the noise all along" as evidence for inattentive phenomenality (1998, p. 4). The most straightforward way of explaining this case is the hypothesis that "there is a period in which one has phenomenal consciousness of the noise without access consciousness of it" (1998, p. 4).

Term: Qualitative states
Definition: On this definition, a state is conscious just if it has or involves qualitative or experiential properties of the sort often referred to as "qualia" or "raw sensory feels" (e.g. the sensation of seeing red) (see Van Gulick, 2004).
Comments: See comments above on phenomenal consciousness.
"What-it-is-like" states A mental state is conscious in this sense if and only if there is something that it is like to be in that state (e.g. the feeling of "what it is like" to be a bat described by Nagel, 1974). See above comments on Nagel and on phenomenal consciousness.
Term: Reflexive consciousness (also known as introspective or monitoring consciousness, or as "states one is aware of")
Definition: According to Block (1995, 2001), a state is reflexively conscious if it is the object of another of the subject's states (e.g. when I have a thought that I am having an experience). Alternatively, "a state S is reflexively conscious just in case it is phenomenally presented in a thought about S" (Block, 2001, p. 215). Similarly, Rosenthal (1986, 1996) defines a conscious mental state as a mental state one is aware of being in.

Conscious states in this sense require the existence of mental states that are about other mental states. "To have a conscious desire for a cup of coffee is to have such a desire and also to be simultaneously and directly aware that one has such a desire" (Van Gulick, 2004).

For some philosophers (e.g. Rosenthal, 2002), awareness of one's mental states is a requirement for having phenomenal consciousness. Others (e.g. Dretske, 1995) contend that sensory awareness is sufficient for consciousness. However, it has yet to be shown that any non-human animals are capable of reflexive consciousness.

Lurz (2003) considers the idea of a non-human animal having thoughts of any kind about its mental states to be highly implausible. (On Lurz's "same-order" account, a creature's experiences are conscious if it is conscious of what its experiences represent - i.e. their intentional object - even if it is not conscious that it is perceiving.)

Term: Self-consciousness
Definition: Block (1995) defines self-consciousness as the possession of the concept of the self and the ability to use this concept in thinking about oneself.
Comments: "As yet, the only evidence that an animal may have an awareness of the 'self' versus awareness of other individuals has been demonstrated in chimpanzees and possibly orang-utans and dolphins" (Emery and Clayton, 2004, p. 41; see also Gallup, Anderson and Shillito, 2002; Reiss and Marino, 2001). This evidence comes from mirror tests. However, some philosophers (Leahy, 1994) argue that mirror tests merely indicate that an animal possesses consciousness of its own body, as opposed to true self-consciousness.
Term: Narrative consciousness
Definition: This is the "stream of consciousness", regarded as an ongoing, more or less serial narrative of episodes from the perspective of an actual or merely virtual self. A person's conscious mental states are simply those that appear in her stream of consciousness.
Comments: There are no currently accepted criteria for narrative consciousness that can tell us which animals possess it.

The most important distinction made in the philosophical literature on consciousness is between creature consciousness, or consciousness applied to a living creature (e.g. a bird) and state consciousness, which applies to a creature's mental states (e.g. a bird's perceptions of a worm) (see Rosenthal, 1986).

Whereas creature consciousness can be intransitive (e.g. a bird's being awake and not asleep or comatose) or transitive (e.g. a bird's consciousness of a worm), state consciousness can only be intransitive. As Dretske (1997) puts it:

States ... aren't conscious of anything. They are just conscious (or unconscious) full stop (1997, p. 4).

However, Ned Block (1997) has criticised the concept of state consciousness as a mongrel concept, and proposed a distinction between two different types of state consciousness: access consciousness and phenomenal consciousness (described in the table above).

The question of which animals have subjective feelings or conscious experiences can now be re-formulated as: which animals are phenomenally conscious, as opposed to merely creature-conscious or access-conscious?

I contend that the philosophical distinctions between the various forms of consciousness are problematic for three main reasons:

(a) they are poorly defined;

(b) they confuse conceptual with real distinctions;

(c) they overlook what appear to be nomic connections between some of the different concepts of consciousness, suggesting that what appear to be different concepts are in reality inseparable.

Transitive and intransitive creature consciousness

First, transitive and intransitive creature consciousness are both defined ambiguously in empirical terms. Transitive creature consciousness, in its broadest sense, is a property of all cellular organisms: as we saw in chapter two, even bacteria possess senses of some sort. In a narrower sense, the term applies to all organisms with a nervous system. The fact that individual organisms display various degrees of responsiveness to their surroundings creates further complications: what level of responsiveness is required for transitive creature consciousness? The term "intransitive creature consciousness" is also poorly defined, as it fails to distinguish between two very different forms of wakefulness (and sleep) in animals: wakefulness defined according to behavioural criteria (found in virtually all animals but not in bacteria, protoctista, plants or fungi) and wakefulness defined according to brain-related electrophysiological criteria, which are unique to mammals and birds (Shaw et al., 2000).

Second, transitive and intransitive creature consciousness differ greatly in their scope within the animal kingdom. The distinction between them is not a grammatical one but a real one. The commonly held notion (see, for instance, Carruthers, 2004b) that transitive creature consciousness (which, as we saw, is common to all cellular life-forms) presupposes intransitive consciousness is refuted by the fact that both brain and behavioural sleep (i.e. the absence of intransitive consciousness) have been defined only for animals (Kavanau, 1997, p. 258), which, as we saw in chapter two, comprise only a tiny twig on the tree of life. Moreover, some animals (such as alligators) exhibit neither brain nor behavioural sleep (Kavanau, 1997, p. 258).

Third, the distinctions between these forms of consciousness and phenomenal consciousness should not be taken to mean that the former can always exist without the latter. To be sure, transitive creature consciousness can certainly occur in the absence of phenomenal consciousness - as illustrated by the vomeronasal system, which responds to pheromones and affects human behaviour but is devoid of phenomenality (Allen, 2003, p. 13), and by the phenomenon of blindsight in humans and monkeys (Stoerig and Cowey, 1997, pp. 536-538, 552). Likewise, intransitive consciousness in its most general form (behavioural wakefulness) can exist in the absence of phenomenal consciousness, as shown by the condition of persistent vegetative state (PVS), which has been defined as "chronic wakefulness without awareness" (JAMA, 1990). I describe this condition in the Appendix. PVS patients display a variety of wakeful behaviours, all of which are generated by their brain stems and spinal cords. Studies have shown that activity occurring at this level of the brain is not accessible to conscious awareness in human beings (Rose, 2002, pp. 13-15; Roth, 2003, p. 36).

It is a curious fact that the philosophical literature on consciousness has overlooked a massive body of neurological evidence from EEG studies, suggesting a nomic connection between brain wakefulness (i.e. intransitive creature consciousness as defined by brain-related criteria) and primary consciousness: the former seems to guarantee the occurrence of the latter in all human beings studied to date (Cartmill, 2000; Baars, 2001; White, 2000). ("Primary consciousness" is a scientific term describing the ability of an individual to report events in his/her surroundings; thus its occurrence in human beings is commonly taken to indicate the presence of phenomenal consciousness.) According to Baars (2001), there is a sharp contrast between the EEG patterns of human patients in states of global unconsciousness (deep unconscious sleep, coma, PVS, general anaesthesia and epileptic states of absence) and the EEG of patients in a state of waking consciousness, who are able to give "accurate, verifiable report" of events in their surroundings (Baars, 2001, p. 35). Additionally, everyday experience shows that no matter how hard we try, we cannot rouse a sleeping person to brain wakefulness without thereby making her (a) alert to her surroundings (primary-conscious) and (b) phenomenally conscious.

Moreover, we now know which kinds of animals satisfy the brain-based criteria for wakefulness, and research to date suggests that at least some of these animals possess primary consciousness while awake, which suggests that they are phenomenally conscious too. (However, a few philosophers, such as Carruthers, question whether primary consciousness should be used to infer the occurrence of phenomenal consciousness.) Some scientists (Baars, 2001; Cartmill, 2000) have suggested that wakefulness - defined according to brain criteria - is a reliable indicator of phenomenal consciousness across all animal species. The connection between having a brain that is awake and being phenomenally conscious may well turn out to be nomic in animals.

Access consciousness, phenomenal experience and other varieties of state consciousness in animals

Block (1995) has certainly performed a valuable service to philosophy in drawing a conceptual distinction between access and phenomenal consciousness. However, I contend that Block's conceptual distinction fails to carve reality at the joints for animals, and thus sheds little light on the question of which animals possess phenomenal consciousness. In order to answer this question, Block needs to define a "weaker" notion of consciousness that many animals could plausibly be said to satisfy even if they lacked phenomenal consciousness. The problem with Block's notion of access consciousness is that it is, if anything, even more cognitively demanding than phenomenal consciousness, as it occurs only when an internal representation is "poised for free use as a premise in reasoning and can be freely reported" (1998, p. 4). It is therefore puzzling that Block elsewhere claims not only that some non-linguistic animals (e.g. chimps) have access-conscious states (1995, p. 238), but that "very much lower animals" are access-conscious too (1995, p. 257).

Other proposed categories of animal consciousness

Animals appear to possess various kinds of consciousness which have been generally neglected by both scientists and philosophers (but see Sjolander, 1993; Dennett, 1995; and Grandin, 1998). I discuss two of these in the Appendix. I have chosen to refer to these categories of consciousness as integrative consciousness (consciousness that gives an animal access to multiple sensory channels and enables it to integrate information from all of them) and object consciousness (awareness of object permanence).

What makes phenomenal states subjective? An outline of the current philosophical positions

The contemporary philosophical debate about animal consciousness is split into several camps, with conflicting intuitions regarding the following four inconsistent propositions (Lurz, 2003):

1. Conscious mental states are mental states of which one is conscious.
2. To be conscious of one's mental states is to be conscious that one has them.
3. Animals have conscious mental states.
4. Animals are not conscious that they have mental states.

Table 4.2 - Key positions in the contemporary philosophical debate on "consciousness" (Lurz, 2003)
School of thought: Higher-order representational (HOR) theories of consciousness
Position: Accept propositions 1 and 2, and either 3 or 4. HOR theorists argue that a mental state (such as a perception) is not intrinsically conscious, but only becomes conscious as the object of a higher-order state. Higher-order states are variously conceived as thoughts (by HOT theorists) or as inner perceptions (by HOP theorists).
Comments: Dretske (1997) objects that HOR theories fail to explain the practical function of consciousness and thus effectively marginalise it. More recently, higher-order theorists have formulated their own proposals regarding the function of consciousness (see Carruthers, 2000). For an overview of theories of the function of phenomenal consciousness, see Table 4.3 below.
School of thought: Exclusive HOR theorists (Carruthers, 2000, 2004)
Position: Accept propositions 1, 2 and 4 but reject 3 - that is, they allow that human infants and non-human animals have beliefs, desires and perceptions, but insist (Carruthers, 2000, p. 199) that we can explain their behaviour perfectly well without attributing conscious beliefs, desires and perceptions to them.
Comments: Carruthers' denial of phenomenal consciousness to animals entails the bizarre conclusion that only humans have subjective feelings. If this is correct, the proven effectiveness of "pet therapy" (Midgley, 1993) becomes very difficult to account for. Carruthers' view also implies that friendship between humans and other animals is inappropriate.
School of thought: Inclusive HOR theorists (Rosenthal, 1986, 2002)
Position: Accept propositions 1, 2 and 3 but reject 4. Rosenthal (2002) construes an animal as having a thought that it is in some state. Such a thought requires a minimal concept of self, but "any creature with even the most rudimentary intentional states will presumably be able to distinguish between itself and everything else" (2002, p. 661).
Comments: Rosenthal's (2002) HOT theory requires an animal to have the higher-order thought that it is in a certain state before the state can qualify as conscious. This is a very strong requirement.

According to HOT theorists, mental states do not become conscious merely by being observed; they become conscious by being thought about by their subject. This means that animals must have non-observational access to their mental states. As Lurz (2003) remarks, this is an implausible supposition for any non-human animal: "it is rather implausible that my cat... upon espying movement in the bushes... is conscious that she sees movement in the bushes, since it is rather implausible to suppose ... that my cat has thoughts about her own mental states".

School of thought: Lurz's (2003) same-order (SO) account
Position: Presents itself as a via media between HOR and FOR theories. Accepts propositions 1, 3 and 4 but rejects 2. Lurz grants the premise that to have conscious mental states is to have mental states that one is conscious of, but queries the assumption (shared by HOR and FOR theorists) that to be conscious of one's mental states is to be conscious that one has them. Lurz suggests that a creature's experiences are conscious if it is conscious of what its experiences represent - their intentional object - even if it is not conscious that it is perceiving.

Lurz's example is that of a cat who notices a movement in the bushes and then behaves in a way that warrants our saying that she is paying attention to it. This (according to Lurz) implies that she is conscious of what she is seeing, and hence conscious of what a token visual state of hers represents, and thus in some way conscious of the mental state itself.

The cognitive requirements that Lurz is imposing on animals are hardly exacting, as they seem to require nothing more than (a) a capacity for paying attention, which exists in a rudimentary form even in fruit-flies, and (b) a capacity for object recognition, which is also found in honeybees (see Appendix to chapter two).

However, the available neurological evidence suggests that these animals lack the wherewithal for consciousness (see below).

School of thought: First-order representational (FOR) accounts of consciousness (Dretske, 1995)
Position: Accept propositions 2, 3 and 4 but reject 1. FOR theorists believe that if a perception has the appropriate relations to other first-order cognitive states, it is phenomenally conscious, regardless of whether the perceiver forms a higher-order representation of it (see Wright, 2003). For example, Dretske argues that a mental state becomes conscious simply by being an act of creature consciousness. Thus an animal need not be aware of its states for them to be conscious.

On this account, consciousness has a very practical function: to alert an animal to salient objects in its environment - e.g. potential mates, predators or prey. However, attention is not a pre-requisite for consciousness: "You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it" (Dretske, 1997, p. 2).

Lurz (2003) argues against Dretske on linguistic grounds: it is counter-intuitive to say that an animal could have a conscious experience of which it was not conscious. However, this argument overlooks the possibility that there may be different degrees of phenomenality - as shown by phenomena such as peripheral vision, so-called "distracted driving" and change blindness (for a discussion, see Hardcastle, 1997; Wright, 2003).

More telling is the argument that while some of a conscious animal's experiences may well be first-order states as Dretske proposes, it would be improper to describe the creature as phenomenally conscious if all of its experiences were of this sort.

Dretske's (1997) assertion that attention is not required for consciousness is at odds with his argument that consciousness must serve a practical function in promoting an animal's survival. An animal completely lacking the ability to pay attention to a salient stimulus would not survive very long in the wild.

Finally, as noted above, experimental evidence indicates that transitive creature consciousness can occur in the absence of phenomenal consciousness - witness the vomeronasal system, which affects human behaviour while remaining devoid of phenomenality (Allen, 2003, p. 13), and blindsight in humans and monkeys (Stoerig and Cowey, 1997, pp. 536-538, 552).

I would like to make some general comments about the positions summarised in the table above.

The temptation to assess the merits of rival accounts of what phenomenal consciousness is by resorting to conceptual "thought experiments" should be firmly rejected. To suppose that if I can imagine X in the absence of consciousness, then X is insufficient to explain consciousness, is to assume an aprioristic account of explanation, of the sort which Hume effectively criticised. Carruthers (2001) proposes a more modest explanatory goal for accounts of phenomenal consciousness:

[A] reductive explanation of something - and of phenomenal consciousness in particular - doesn't have to be such that we cannot conceive of the explanandum (that which is being explained) in the absence of the explanans (that which does the explaining). Rather, we just need to have good reason to think that the explained properties are constituted by the explaining ones, in such a way that nothing else needed to be added to the world once the explaining properties were present, in order for the world to contain the target phenomenon (Carruthers, 2001).

We should also be sceptical of any attempt to resolve disputes about animal consciousness by appealing to existing linguistic norms relating to the use of the word "conscious", for three reasons. First, these norms were originally formulated by people, for people: thus they are likely to be only partially applicable to other species.

Second, the longevity of the philosophical dispute about consciousness suggests that none of the positions has a monopoly on proper usage of the term "conscious". The fact that each of the four propositions outlined above has some intuitive appeal when taken alone, suggests that the linguistic norms relating to the use of the term "conscious" are at best vague and at worst inconsistent.

Third, all of the current philosophical positions on consciousness appear to share an underlying assumption: that the difference between phenomenally conscious mental states and other states can be formulated in terms of concepts which already exist within our language. This assumption may turn out to be wrong: we may require new linguistic terminology to formulate this distinction properly.

A final point is that Lurz's tetralemma assumes that the question "What makes a mental state phenomenally conscious?" has a single answer. However, "phenomenal consciousness" (or phenomenality, as Block now prefers to call it) may come in different grades or varieties, which need to be carefully distinguished (see Block, 2001). (Rosenthal (2002) distinguishes between "thin" and "thick" phenomenality, although he considers only the latter truly conscious.)

Carruthers (2000) has mounted a sustained philosophical attack on the possibility of phenomenal consciousness in animals. Carruthers' philosophical arguments in support of his theory of consciousness have been subjected to a detailed critique by Allen (2003) (summarised in the Appendix). However, Carruthers' substantive point, that an ability to distinguish between the way things appear and the way they really are is a pre-requisite for phenomenal consciousness, remains philosophically tenable. (Allen (2002) himself proposes that any animals that can learn to correct their perceptual errors are phenomenally conscious, though he does not make it a necessary requirement.) I argue in the Appendix that the meagre experimental evidence available suggests that only human beings meet Carruthers' requirement for phenomenal consciousness.

Even if Carruthers' sceptical position proves to be consistent with the experimental evidence, there could still be other good reasons for regarding the occurrence of phenomenal consciousness in at least some animals as a properly basic belief, as Searle does:

[I]t doesn't matter really how I know whether my dog is conscious, or even whether or not I do know that he is conscious. The fact is, he is conscious and epistemology in this area has to start with this fact (Searle, 1998, p. 50, italics mine).

In support of this position, I propose a transcendental argument, which takes as its starting point the fact that human beings throughout history have befriended certain animals (such as cats and dogs) and benefited emotionally thereby (see Midgley, 1993). To affirm the reality of this mutual friendship entails affirming the conditions that make it possible - one of which is the fact that the animals involved are phenomenally conscious. However, this argument could only be applied to a handful of animal species. Treating animal consciousness as properly basic fails to resolve the question of which non-human animals possess consciousness and which ones lack it (the Distribution Question) (Allen, 2003, pp. 7-8).

In summing up, I would suggest that the "original sin" of philosophers who have formulated theories of phenomenal consciousness was to suppose that the requirements for subjectivity could be elucidated on an a priori basis, through careful analysis. Now, an analytical approach might work if we had a good idea of what consciousness is, or why it arose in the first place, or what it is for. In fact, we know none of these things. Cotterill (2001) notes that "[a] major problem confronting those who would explain consciousness is its apparently multifarious nature; there seems to be too large an inventory of its advantages to permit a succinct definition" (2001, p. 19). The table below lists a selection of theories of why consciousness exists.

Table 4.3 - Theories of what consciousness is for: a brief overview
1. Consciousness is an epiphenomenon (Huxley).
2. Conscious feelings exist because they motivate an animal to seek what is pleasant and avoid what is painful (Aristotle).
3. Consciousness arose because it enabled its possessors to unify or integrate their perceptions into a single "scene" that cannot be decomposed into independent components (Edelman and Tononi, 2000).
4. Consciousness arose because it was more efficient than programming an organism with instructions enabling it to meet every contingency (Griffin, 1992).
5. Consciousness arose to enable organisms to meet the demands of a complex environment. However, environmental complexity is multi-dimensional; it cannot be measured on a scale (Godfrey-Smith, 2002).
6. Consciousness evolved to enable animals to deal with various kinds of environmental challenges their ancestors faced (Panksepp, 1998b).
7. Consciousness arose so as to enable animals to cope with immediate threats to their survival such as suffocation and thirst (Denton et al., 1996; Liotti et al., 1999; Parsons et al., 2001).
8. Consciousness gives its possessors the advantage of being able to guess what other individuals are thinking about and how they are feeling - in other words, a "theory of mind" (Whiten, 1997; Cartmill, 2000).
9. Consciousness arises as a spin-off from such a theory-of-mind mechanism (Carruthers, 2000).
10. Brain activity (as defined by EEG patterns) that supports consciousness in mammals is a precondition for the full array of their survival and reproductive behaviours (e.g. locomotion, hunting, evading predators, mating, attending, learning and so on) (Baars, 2001).
11. Activities that are essential to the survival of our species - e.g. eating, raising children - require consciousness (Searle, 1999). It must therefore have a biological role.
12. Animals receive continual bodily feedback from their muscular movements when navigating their environment. Conscious animals have a very short real-time "muscular memory" which alerts them to any unexpected bodily feedback when probing their surroundings. A core circuit in their brains then enables them to cancel, at the last second, a movement they may have been planning, if an unexpected situation arises. This real-time veto-on-the-fly may save their lives (Cotterill, 1997).

Recently, Cotterill (2001) has made a bold attempt to pare down these explanations, by arguing that many of the benefits of consciousness are tied to the same underlying mechanism, but a scientific consensus on the "why" of consciousness remains elusive. On the other hand, there is an abundance of neurological data relating to how it originates in the brain. I suggest that this is the best place to look for answers on animal consciousness.


1(b) Scientific findings regarding consciousness

Table 4.4 - Different scientific usages of the term "consciousness"
Term: Primary consciousness (also called "core consciousness" or "feeling consciousness").

Definition: "Primary consciousness refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions" (Rose, 2002, p. 6).

Criteria: "From the clinical perspective, primary consciousness is defined by

(1) sustained awareness of the environment in a way that is appropriate and meaningful,

(2) ability to immediately follow commands to perform novel actions, and

(3) exhibiting verbal or nonverbal communication indicating awareness of the ongoing interaction...

Thus reflexive or other stereotyped responses to sensory stimuli are excluded by this distinction" (Rose, 2002, p. 6, italics mine).

How are these criteria assessed in non-human animals?

The standard observational index used to measure Rose's first and third criteria is "accurate, verifiable report" (Baars, 2001, p. 35):

"In humans reports do not have to be verbal; pressing a button, or any other voluntary response, is routinely accepted as adequate in research" (Baars, 2001, p. 35).

Since the criteria for primary consciousness allow for nonverbal communication, they can be applied to at least some non-human animals. For instance, recent experiments by Stoerig and Cowey (1997, p. 552) have shown that a monkey can be trained to respond to a stimulus in its visual field by touching its position on a screen, and to a blank trial (no stimulus) by touching a constantly present square on the screen that indicates "no stimulus". The monkey's ongoing responses fit the requirements for a nonverbal "accurate, verifiable report" (Baars, 2001) indicating "sustained awareness of the environment" (Rose, 2002, p. 6).
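To make the notion of "accurate, verifiable report" concrete, here is a minimal sketch of how responses in a detection task of this kind might be scored. It is purely illustrative: the trial fields and function names are hypothetical constructs of my own, not part of Stoerig and Cowey's procedure or of Baars' criterion.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    # All field names are hypothetical, for illustration only.
    stimulus_present: bool        # was a stimulus shown on this trial?
    touched_stimulus: bool        # did the subject touch the stimulus location?
    touched_no_stim_square: bool  # did it touch the "no stimulus" square?

def is_accurate_report(trial: Trial) -> bool:
    """A trial counts as an accurate report when the response matches
    what was actually presented."""
    if trial.stimulus_present:
        return trial.touched_stimulus
    return trial.touched_no_stim_square

def report_accuracy(trials: list[Trial]) -> float:
    """Fraction of trials on which the subject reported accurately."""
    return sum(is_accurate_report(t) for t in trials) / len(trials)

# Example session: three stimulus trials and one blank trial.
session = [
    Trial(True, True, False),
    Trial(True, True, False),
    Trial(True, False, False),  # a miss
    Trial(False, False, True),  # correct rejection on a blank trial
]
print(report_accuracy(session))  # 0.75
```

On a scheme like this, a subject whose accuracy stays well above chance on both stimulus and blank trials is giving exactly the kind of ongoing, verifiable nonverbal report that the criterion demands.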

According to Stoerig and Cowey (1997, p. 552), lack of awareness has also been experimentally verified in studies of monkeys with blindsight, a condition in which patients with damage to the visual cortex of the brain lose their subjective awareness of objects in a portion of their visual field, but sometimes retain the ability to make visual discriminations between objects in their blind field.

Recent research has also shown ways in which an animal could satisfy Rose's second criterion ("ability to immediately follow commands to perform novel actions"). For instance, some dolphins, after having been trained in an artificial language of 40 "words" - actually hand and arm gestures - can respond correctly to novel combinations of words (Hart, 1996, pp. 74-75; Herman, 2002, pp. 278-279). Sea lions can respond to novel instructions with up to seven signs, asking them, for instance, to bring a small black ball to a large white cone (Schusterman et al., 2002). The ability of Alex, the African grey parrot, to correctly "distinguish quantities of objects, including groups of novel items, heterogeneous collections, and sets in which objects are randomly arrayed" (Pepperberg, 1991), also seems to meet the novelty criterion.

On the other hand, the impressive ability of honeybees to successfully distinguish "same" and "different" in match-to-sample trials (Giurfa et al., 2001) would not satisfy Rose's second criterion: although the bees were performing a novel action, they were not following a command to do so.

Term: Secondary consciousness (also known as "extended consciousness" or "self-awareness").

Definition and criteria: "Higher-order consciousness includes awareness of one's self as an entity that exists separately from other entities; it has an autobiographical dimension, including a memory of past life events; an awareness of facts, such as one's language vocabulary; and a capacity for planning and anticipation of the future" (Rose, 2002, p. 6).

How are these criteria verified in non-human animals?

Secondary consciousness is commonly thought to be unique to human beings and possibly chimpanzees (Rose, 2002, p. 7). Recently, however, the best evidence for most of the indicators of secondary consciousness has come from birds (Emery and Clayton, 2004), with the notable exception of self-consciousness.

Mirror tests are commonly used to show that an animal possesses a concept of self. Gallup, Anderson and Shillito (2002) have successfully defended the mirror test against attempts to discredit its validity. However, some philosophers (Leahy, 1994) argue that mirror tests merely indicate that an animal possesses consciousness of its own body, as opposed to true self-consciousness.

As yet, the only evidence that an animal may have an awareness of the 'self' versus awareness of other individuals has been demonstrated in chimpanzees and possibly orang-utans and dolphins (Emery and Clayton, 2004, p. 41).

Reiss and Marino (2001) have reported that dolphins pass the mirror test.

Field studies of animals' social interactions (especially signalling and deception) have been used as evidence that they possess a rudimentary theory of mind, but Emery and Clayton (2004) argue that alternative, non-intentional explanations of the behaviour are often possible. It is also unclear how much of the behaviour is unlearned. Emery and Clayton (2004) argue that only controlled laboratory studies on a wide variety of species can establish whether some of them have a theory of mind. At present, the best evidence from controlled laboratory studies for a "theory-of-mind" in animals comes from experimental studies of birds: ravens and western scrub-jays. These birds will delay caching excess food for later use if other individuals are in the vicinity, and wait until would-be thieves are distracted or have moved away before they resume caching. The birds appear to learn this behaviour from their own experience of pilfering other birds' caches (Emery and Clayton, 2004, pp. 17-23). There is tentative evidence for a rudimentary theory of mind in animals as diverse as chimpanzees, dogs and elephants (Horowitz, 2002; Nissani, 2004), but the evidence is not as rigorous as the tightly controlled laboratory studies of ravens and scrub jays described by Emery and Clayton (2004, pp. 15-17). A study by Povinelli (1998) is often cited as evidence that chimpanzees lack a theory of mind, although research by Nissani (2004) seems to overturn some of Povinelli's results.

Language: Language ability in animals is assessed by a variety of procedures. These include: studies of vocal learning (only six animal groups - humans, cetaceans, bats, parrots, songbirds and hummingbirds - are capable of this feat (Jarvis et al., 2000)); training animals to use artificial sign language or lexigrams (Budiansky, 1998, summarises work done with chimpanzees and bonobos); interrogating animals to verify their ability to apply categorial concepts to novel objects, by asking questions such as "What shape?" (Pepperberg, 2002); and controlled studies of allegedly referential alarm calls made by animals to signal the presence of a predator (Cheney and Seyfarth, 1990; Slobodchikoff, 2002; Evans, 2002). Although there is good evidence of referential communication in non-human animals (monkeys and birds), at least some of these animals appear to lack a theory of mind (Cheney and Seyfarth, 1996). Despite the richness of their conceptual representations (Pepperberg, 2002), the vocabulary used by non-human animals is typically very limited, unlike that of human beings. There is no evidence of creative production of sounds for new situations (Hauser, Chomsky and Fitch, 2002, pp. 1575-6).

Autobiographical memory is probably unique to human beings. The ability of animals to recall episodes from their past (episodic memory) is assessed by controlled field studies in which animals are required to recall "when" and "where" information relating to specific items or individuals. At present, the only good evidence for episodic memory in animals comes not from primates but from birds. Western scrub jays can recall information about when a particular food item was cached, as well as what was cached and where (Emery and Clayton, 2004, p. 32). The apparent ability of non-human primates to keep track of who did what to whom and where is currently considered too difficult to test under controlled experimental conditions (Emery and Clayton, 2004, p. 32).

Field observations of great apes are not sufficiently rigorous to establish that future planning is required to account for their behaviour, and most laboratory studies claiming to have tested future planning in animals involve short retention intervals that only relate to the animal's immediate future. Currently the best evidence for future planning in animals is the food-caching behaviour of scrub jays (Emery and Clayton, 2004, pp. 35-36).

Rose (2002) remarks that "[m]ost discussions about the possible existence of conscious awareness in non-human animals have been concerned with primary consciousness" (2002, p. 6).

Critical evaluation of scientific criteria for primary consciousness

Although the evidence for primary consciousness in animals looks promising, there are methodological problems associated with applying accurate report, the standard behavioural criterion for consciousness in humans (Baars, 2001), to other species, whose ethogram may not be compatible with the physical response required. While a manually dextrous animal like a monkey can press a button to report what it sees, a fish cannot. Of course, we could simplify the test by simply requiring the fish to behaviourally discriminate between different visual stimuli, but as Seth, Baars and Edelman (2005) point out, this creates a slippery slope: even single-celled creatures can perform such discriminations, and in any case there is no reason to call them (phenomenally) conscious.

The procedure of testing animal awareness by commanding them to perform novel actions is also philosophically problematic. How novel do the actions have to be? ("Raise your right paw and hold it in front of your nose." - I think Fido would flunk this one, although dolphins have shown an impressive ability to imitate human motor acts without requiring any training (Herman, 2002).) What if the action is actually a novel combination of simple actions, each of which the animal has rehearsed thousands of times?

The notion of "following a novel command" suffers from another limitation: it is inapplicable in situations where the animal's sensory capacities do not allow it to realise that it is being given a command. A very small animal like a fly would have trouble even identifying a large object such as its human trainer.

I conclude that while the concept of primary consciousness is legitimately applicable to some non-human animals, it is an imperfect tool, and does not exhaust the notion of phenomenal consciousness in animals.

The relationship of primary consciousness to phenomenal consciousness

Before addressing the relationship between primary and phenomenal consciousness, I wish to point out that the ontological question of what primary consciousness is ("moment-to-moment awareness") is quite distinct from the epistemological question of which indicators scientists should use to identify it. The latter cannot define the former. The theoretical concept of primary consciousness appears to be the same as that of subjective awareness or phenomenal consciousness - except for the fact that it explicitly excludes higher-order states, which are also phenomenal. Whether the clinical/experimental concept used by scientists successfully captures the necessary and/or sufficient conditions for subjective awareness is another question altogether.

(i) Is a capacity for primary consciousness a necessary condition for the warranted ascription of phenomenal consciousness to an animal?

The occurrence of dreams shows that the ability to give an accurate, ongoing report of one's surroundings (primary consciousness) is not a necessary condition for phenomenal consciousness. But dreams are a derivative form of consciousness, whose content depends on what we experience when awake. More telling is the argument that some human beings (e.g. newborn babies) are commonly said to have conscious feelings but are incapable of giving an accurate report. However, it is likely that babies' brains, while mature enough to have experiences, still lack the degree of motor co-ordination required to give an accurate report of them. Adult animals, on the other hand, do not suffer from a lack of co-ordination, so they could reasonably be expected to meet the criterion of "accurate, verifiable report".

A few neurologists consider the accurate report criterion to be too "cognitive". Panksepp (1998, 2001, 2003f) and Liotti and Panksepp (2003) have proposed that we possess two distinct kinds of consciousness: cognitive consciousness, which includes perceptions, thoughts and higher-level thoughts about thoughts and requires a cortex, and affective consciousness, which relates to our feelings and arises within the brain's limbic system. I discuss Panksepp's arguments in the Appendix, where I conclude that while there is good evidence for two different kinds of consciousness in animals, the two forms of consciousness he describes are not completely independent of one another: some brain structures (e.g. the anterior cingulate cortex) play a role in regulating both. Moreover, the notion that an animal that completely lacked any capacity for cognitive consciousness could still possess affective consciousness is philosophically problematic, as it assumes (contrary to our conclusions in chapter three) that we can meaningfully attribute emotions to creatures with no beliefs.

Of course, if it were possible to nominate certain kinds of behaviour other than accurate reporting that reliably indicated the presence of phenomenal consciousness, then primary consciousness would no longer be required to justify the ascription of phenomenal consciousness to animals. I examine several such proposed indicators in the Appendix: in particular, Panksepp's criteria for affective consciousness; behavioural indicators of pain; and hedonic behaviour in animals. Briefly, I conclude that:

(i) while the behavioural criteria nominated by Panksepp probably do indicate the presence of a rudimentary form of phenomenal consciousness, none of them is unambiguous, and some of the behaviours cited can be explained by non-conscious mechanisms;

(ii) none of the behavioural response patterns that are commonly proposed as measures of pain in animals can be regarded as an unambiguous indicator of phenomenally conscious animal pain per se, and the few behaviours that do indicate pain (e.g. generation of strategies for dealing with the pain) are the more "cognitive" ones;

(iii) neither the willingness of some animals to make hedonic trade-offs whereby they expose themselves for a short time to an aversive stimulus in order to procure some attractive stimulus, nor the presence of "rational" and "irrational" forms of pursuit can be treated as an unambiguous indicator of phenomenally conscious pleasure in animals.

I conclude that for the time being, accurate report is probably the best behavioural indicator we have for the occurrence of phenomenal consciousness in animals.

(ii) Is a capacity for primary consciousness sufficient to warrant the ascription of phenomenal consciousness to an animal?

The inference from the discovery of blindsight in monkeys to the conclusion that normal monkeys are subjectively aware of what they see seems an obvious one. One prominent dissenter is Carruthers (2004b), who suggests that when a blindsighted monkey presses a "not seen" key, it is not reporting its subjective lack of awareness, but simply signalling the (perceived) absence of a light. Normal monkeys perceive, but are not subjectively aware of what they perceive. For a perception to count as subjective, Carruthers argues, the percipient must be able to make a distinction between appearance and reality. Only if an individual can understand the difference between "looks green" and "is green" can we be sure that they have the phenomenology of green. To understand the difference, argues Carruthers, an individual must have a "theory of mind" which enables her to grasp that how an object looks to you may not be the same as how it looks to me.

Seth, Baars and Edelman (2005) acknowledge that our interpretation of the monkey's behaviour cannot be justified by the behavioural evidence alone, but argue that an additional factor justifies the attribution of conscious feelings to monkeys: the fact that "monkeys and humans share a wealth of neurobiological characteristics apparently relevant to consciousness (Logothetis, 2003)" (2005, italics mine). It is to these characteristics that we now turn.

Neural pre-requisites for consciousness

(I would like to acknowledge a special debt of gratitude here to Dr. Jaak Panksepp, Dr. James Rose and Dr. David Edelman, for their patience in answering my queries. Any errors here are entirely my own.)

The major divisions of the brain. Diagram courtesy of Dr Anthony Walsh, Chairman, Department of Psychology, Salve Regina University, Rhode Island.
Note: the term "brain stem" is used to denote the diencephalon (hypothalamus and thalamus), mesencephalon (mid-brain) and rhombencephalon (hind-brain).

The reticular activating system (RAS) comprises parts of the medulla oblongata, the pons and the midbrain, and receives input from all of the body's senses except smell. When the parts of the RAS are active, nerve impulses pass upward to widespread areas of the cerebral cortex, both directly and via the thalamus, effecting a generalised increase in cortical activity associated with wakefulness and consciousness. Image courtesy of Dr. Rosemary Boon, founder of Learning Discoveries Psychological Services.

Table 4.5 - Key neurological features of primary consciousness
What are the distinguishing neural properties of primary consciousness?

According to Dr. David Edelman (a neurologist at The Neurosciences Institute, San Diego, and the son of Professor Gerald Edelman), there are three major properties of consciousness that are fairly well accepted by neurobiologists (personal email, 19 July 2004):

  • (1) Waking consciousness is associated with low-amplitude, irregular activity in raw EEG recordings, ranging from 20 to 70 Hz.

    During deep sleep, in persistent vegetative states, under anesthesia, and during epileptic absence seizures, EEG recordings show slow, high-amplitude, regular activity of the order of less than 4 Hz. This seems to be the case in the brains of all mammals from which such recordings have been made (Edelman, personal communication, 19 July 2004).

  • (2) Consciousness requires a thalamus, a cortex and recursive (or reentrant) pathways between the two. The thalamus is a switching centre which functions as a "doorway" (Roth, 2003, p. 35) to the cerebral cortex - the brain's outer layer. A third region, the basal ganglia (situated deep in the forebrain) is also involved: consciousness "almost certainly" requires complex and recursive pathways between regions of the cortex, the thalamus, and the basal ganglia (Edelman, personal email, 19 July 2004). Profound damage to other regions of the brain (e.g. the cerebellum) or even the removal of an entire hemisphere of the cerebral cortex will not destroy consciousness, but damage to the thalamus can do so. Interactions between the cortex and thalamus determine the form that consciousness takes.

  • (3) Conscious input activates disparate regions of the brain's cortex: activity appears to spread from the sensory cortex to parietal, prefrontal and medial-temporal areas (see illustration below), whereas input that we are not consciously aware of remains confined to localised regions of the sensory cortex. As novel, conscious tasks turn into automatic and unconscious skills with practice, activity in the cortex becomes less widespread and more focal.
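
Returning to the first of these properties: it lends itself to a simple quantitative illustration. The following minimal sketch classifies a raw EEG trace by its dominant frequency, using the two bands Edelman describes (below 4 Hz versus 20-70 Hz). The function names, the use of a bare FFT peak, and the synthetic test signal are my own assumptions for illustration; real EEG classification would also weigh amplitude and regularity.

```python
# A minimal illustrative sketch (not a clinical tool): classifying an EEG
# trace by its dominant frequency, using the two bands described above.
# The band boundaries (< 4 Hz vs. 20-70 Hz) follow Edelman's summary; the
# function names and the use of a simple FFT peak are assumptions.
import numpy as np

def dominant_frequency(signal: np.ndarray, sampling_rate: float) -> float:
    """Return the frequency (Hz) carrying the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_rate)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC component

def classify_eeg_state(signal: np.ndarray, sampling_rate: float) -> str:
    f = dominant_frequency(signal, sampling_rate)
    if f < 4.0:
        return "slow-wave (deep sleep / anaesthesia / PVS-like)"
    if 20.0 <= f <= 70.0:
        return "waking-type (low-amplitude, irregular)"
    return "indeterminate on this criterion"

# Example: a synthetic 2 Hz "slow-wave" trace, 4 seconds sampled at 256 Hz.
t = np.linspace(0, 4, 4 * 256, endpoint=False)
print(classify_eeg_state(np.sin(2 * np.pi * 2 * t), 256.0))
```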

What are the different kinds of conditions for consciousness?

Terminological clarification:

It is important to distinguish the general, enabling factors in the brain that are needed for any form of consciousness to occur from modulating ones that can up- or down-regulate the level of arousal, attention and awareness and from the specific factors responsible for a particular content of consciousness (Koch and Crick, 2001).

Koch and Crick (2001) stress that enabling factors do not correspond to conscious states as such: if they did, one would have to say that consciousness resided in the heart, since it rapidly ceases if the heart stops beating.

  • Consciousness hinges on neural activities occurring within the thalamocortical system (Tononi, 2004). Within the thalamus, the intralaminar nuclei can be described as enabling factors: acute bilateral loss of function in these small structures leads to immediate coma or profound disruption in arousal and consciousness (Koch and Crick, 2001).

  • Among the brain's neuronal modulating factors is the reticular activating system (RAS), whose activities, which occur in nuclei within the brain stem and the midbrain, control the level of neurotransmitters in the thalamus and forebrain. Appropriate levels are needed for sleep, arousal, attention, memory and other functions critical to consciousness (Koch and Crick, 2001).

  • Neuroscientists still have little understanding of why activity in specific areas of the cortex (see illustration below) generates different sensory modalities - e.g. why the auditory and visual cortices are associated with sound and colour respectively (Tononi, 2004).

Which parts of the brain are required for primary consciousness?
  • In human beings, a thalamus and reticular activating system are necessary but not sufficient conditions for primary consciousness, as shown by the fact that we are unaware of neural activity that is confined to the brainstem:

    [A] large part of the activity occurring in our brain is unavailable to our conscious awareness (Dolan, 2000; Edelman and Tononi, 2000; Koch and Crick, 2000; Libet, 1999; Merikle and Daneman, 2000). This is true of some types of cortical activity and is true for all brainstem and spinal cord activity (Rose, 2002, p. 15).

    Note: the brain stem includes the thalamus and hypothalamus, mid-brain, pons, cerebellum, medulla oblongata and spinal cord.

  • Only when neural activity reaches the cerebral cortex - the extensive outer layer of grey matter in the brain's cerebral hemispheres - does it translate into conscious awareness. This region of the brain is believed to be largely responsible for sensation, voluntary muscle movement, thought, reasoning, and memory.

    Destruction of the cerebral cortex leaves a human being in a persistent vegetative state, capable of behavioural wakefulness (e.g. eyes are open) but devoid of all conscious awareness. PVS patients are still capable of stereotypical responses to noxious stimuli (Rose, 2002, p. 13). Non-primate mammals whose cerebral hemispheres have been destroyed are capable of locomotion, postural orientation, elements of mating behavior, and fully developed behavioral reactions to noxious stimuli, but cannot survive without assisted feeding (Rose, 2002, p. 13).

Which structures in the cerebral cortex are required for primary consciousness?
  • The cerebral cortex is mostly made up of a six-layered neocortex, technically known as isocortex. It is this laminated structure that supports consciousness in human beings:

    Extensive evidence demonstrates that our capacity for conscious awareness of our experiences and of our own existence depends on the functions of this expansive, specialized neocortex. This evidence has come from diverse sources such as clinical neuropsychology (Kolb and Whishaw, 1995), neurology (Young et al., 1998; Laureys et al., 1999, 2000a-c), neurosurgery (Kihlstrom et al., 1999), functional brain imaging (Dolan, 2000; Laureys et al., 1999, 2000a-c), electrophysiology (Libet, 1999) and cognitive neuroscience (Guzeldere et al., 2000; Merikle and Daneman, 2000; Preuss, 2000).

    We are unaware of the perpetual neural activity that is confined to subcortical regions of the central nervous system, including cerebral regions beneath the neocortex as well as the brainstem and spinal cord (Dolan, 2000; Guzeldere et al., 2000; Jouvet, 1969; Kihlstrom et al., 1999; Treede et al., 1999) (Rose, 2002, p. 6, italics mine).

  • Human consciousness appears to require brain activity that is diverse, temporally conditioned and of high informational complexity. (This integrative requirement corresponds to the third major property of consciousness listed by David Edelman.) The neocortex satisfies these criteria because it has two unique structural features:

    (1) exceptionally high connectivity within the neocortex and between the cortex and thalamus;

    (2) enough mass and local functional specialisation to permit regionally specialised, differentiated activity patterns (Rose, 2002, p. 7, italics mine).

Which parts of the neocortex are required for primary consciousness?
Divisions of the cerebral cortex. Diagram courtesy of Dr. Gleb Belov, Department of Mathematics, Technical University of Dresden, Germany.
  • The neocortex is divided into primary and secondary regions (which process low-level sensory information and handle motor functions), and the associative regions. Brain monitoring techniques indicate that in human beings, only processes that take place within the associative regions of the cortex are accompanied by consciousness; activities which are confined to the primary sensory cortex or processed outside the cortex are inaccessible to consciousness (Roth, 2003, pp. 36, 38; Rose, 2002, p. 15). Consciousness thus depends on the functions of the association cortex, not primary cortex. The associative regions are distinguished by their high level of integration and large number of connections with other regions of the brain (Roth, 2003, p. 38) - corresponding to Edelman's third major property of consciousness.

  • It is now believed that slow-wave sleep, coma and PVS cause a loss of primary (and phenomenal) consciousness precisely because in these states, the ability to integrate information between different regions of the cerebral cortex is greatly reduced (Tononi, 2004; Baars, 2003).
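
The integration requirement invoked above can be given a toy illustration. In the sketch below, two simulated "regions" whose activity is coupled share information, while two independent "regions" do not. Plain correlation serves here as a crude stand-in for Tononi's integrated information measure, which it makes no attempt to compute; all signals and coupling constants are invented for the purpose.

```python
# A toy illustration of the integration idea: coupled "regions" (cortex-like)
# share information; independent "regions" do not. Correlation is used as a
# crude proxy for integration, not as Tononi's actual measure.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Coupled regions: region_b largely reflects region_a, plus private noise.
region_a = rng.normal(size=n)
region_b_coupled = 0.8 * region_a + 0.2 * rng.normal(size=n)

# Independent regions: entirely separate noise sources.
region_b_independent = rng.normal(size=n)

print("coupled regions:     r =", round(float(np.corrcoef(region_a, region_b_coupled)[0, 1]), 2))
print("independent regions: r =", round(float(np.corrcoef(region_a, region_b_independent)[0, 1]), 2))
```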

Which animals satisfy the criteria for primary consciousness?
  • To a limited degree, all vertebrates possess the key structures which figure in Edelman's (2004) second distinguishing property of consciousness described above. The major subdivisions of the brain - spinal cord, hindbrain, midbrain, diencephalon, telencephalon - are found in all vertebrates. The thalamus is also present. All vertebrate brains have a forebrain pallium, known as the cerebral cortex in mammals (Prescott, 1999, p. 9).

  • To date, the reentrant interactions between thalamus, cortex and basal ganglia which characterise consciousness have only been found in mammals, but they may also occur in other vertebrates such as birds. More research needs to be done (Edelman, personal email, 19 July 2004).

  • Regarding Edelman's first distinguishing property of consciousness: while behavioural sleep is found in most animals, true brain sleep (which is distinguished from brain wakefulness by its EEG patterns) is confined to mammals and birds. According to Baars (2001), "all mammalian species studied so far show the same massive contrast in the electrical activity between waking and deep sleep". Birds' waking EEG patterns resemble those of mammals; and their sleep patterns are very similar to those of mammals, except that REM sleep is shorter (Kavanau, 1997, p. 257; Cartmill, 2000; Edelman, personal email, 19 July 2004). EEG patterns in sleeping reptiles show arrhythmic spiking that resembles non-REM sleep, but lacks the slow-wave patterns that characterise sleep in mammals and birds. In reptiles, sleep is regulated by the limbic system instead of the cerebrum (Kavanau, 1997; Backer, 1998).

  • Although some mammals have much more neocortex in proportion to their body size than others, which probably explains the wide variation in different species' performance in problem-solving tasks, "the size of the neocortex seem[s] to be irrelevant to the existence of wakefulness and perceptual consciousness" among mammals (Baars, 2001).

  • Among animals, only mammals possess a true neocortex (Rose, 2002, p. 10). Some authors have claimed that reptiles and birds have a primordial neocortex, but it does not have the layered structure found in the brains of mammals. Thus it is generally agreed that a fully developed neocortex is present only in mammals (Nieuwenhuys, 1998). Specifically, reptiles and birds do not appear to possess any brain structures possessing the special features of the association cortex - a high level of integration and a large number of connections with other regions of the brain.

  • The brains of non-mammals lack structures with a comparable ability to rapidly integrate diverse kinds of information; the cerebellum is the only plausible candidate, and it fails to qualify. Interestingly, the cerebellum, located at the back of the brain, "contains probably more neurons and just as many connections as the cerebral cortex, receives mapped inputs from the environment, and controls several outputs", and yet "lesions or ablations indicate that the direct contribution of the cerebellum to conscious experience is minimal" (Tononi, 2004), and "removal of the cerebellum does not severely compromise consciousness" (Panksepp, 1998, p. 311). Activity in the cerebellum is thought not to be associated with consciousness because different regions of the cerebellum tend to operate independently of one another, with little integration of information between regions (Tononi, 2004).

  • In the light of the above evidence, many authors are disposed to deny non-mammals any kind of conscious awareness (Edelman and Tononi, 2000; Rose, 2002, who cites supporting authorities).

  • On the other hand, the dorsal ventricular ridge (DVR) in reptiles and birds serves as a principal integratory centre and exhibits a pattern of auditory and visual connections with sensory centres and the thalamus which is broadly similar to that of the sensory neocortex in mammals. Fish and amphibians lack this structure (Russell, 1999; Aboitiz, Morales and Montiel, 2000).

  • The ventricular ridges of birds are well-developed, but not laminated (Kavanau, 1997, p. 258).

    However, even though largely non-laminated, the avian telencephalon [anterior forebrain - V.T.] can generate visual performances of a complexity rivaling and even exceeding those of mammals, previously thought to have been correlated uniquely with cortical lamination... The mechanisms of visual information processing in the brains of birds are ... at least as efficient as those in the mammalian striate cortex (Kavanau, 1997, p. 257).

  • The dorsal ventricular ridge in birds and reptiles should not be regarded as homologous to the mammalian neocortex; instead, it should be viewed as analogous in its causal role in regulating animal behaviour. (Homologous structures are those which have originated from the same structure in a common ancestor; analogous structures play a similar functional role.) Experts continue to disagree as to which parts of the reptilian and avian telencephalon [the anterior division of the forebrain, which includes the cerebrum] correspond to the neocortex. Currently, there is no single criterion by which homology between structures can be established. Commonly used criteria include: identical patterns of connectivity to other brain parts; neurochemistry; and embryonic origins. However, these approaches yield inconsistent results (Aboitiz, Morales and Montiel, 2000).

  • In birds, the dorsal ventricular ridge includes two areas: the hyperstriatum ventrale and neostriatum (Medina, 2002). There is good evidence that the mammalian neocortex and the neostriatum-hyperstriatum ventrale complex in birds have similar integrative roles. Interestingly, the relative size of the hyperstriatum ventrale in different species of birds is the best predictor of their feeding innovation rate, which is regarded as an indicator of cognition (Timmermans et al., 2000).

    Tool-making ability in different bird species has also been shown to correlate with the size of their neo- and hyper-striatum ventrale (Chappell and Kacelnik, 2004). The neostriatum caudolaterale is a structure believed to correspond to the frontal cortex in mammals, which is involved in planning of movement (Lissek and Gunturkun, 2003).

Some neurologists (Panksepp, 1998, 2001, 2003f; Denton et al., 1996; Denton et al., 1999; Egan et al., 2003; Liotti et al., 2001; Parsons et al., 2000; Parsons et al., 2001) question the current neurological consensus and argue that conscious feelings may occur outside the cerebral cortex. Perhaps their most interesting evidence comes from studies of hydranencephalic children (who have little or no cerebral cortex) and decorticate animals (whose cerebral cortex has been removed). After carefully examining their arguments in the Appendix, I conclude that:

(i) for animals whose cerebral cortex was removed during infancy, the trauma of decortication may have affected the neural development of their brain stem, effectively "corticising" it so that some parts were able to take over some of the functions normally handled by a cerebral cortex (vertical plasticity);

(ii) while there appears to be a real distinction between an affective consciousness (centred in the anterior cingulate, which borders the cerebral cortex), and a cognitive consciousness (centred in the cerebral cortex), it is inaccurate to describe the former as completely non-cognitive, as it still involves crude, low-level processing of sensory inputs and hence minimal cognition;

(iii) the evidence for feelings in mammals completely lacking both a cerebral cortex and an anterior cingulate cortex is doubtful;

(iv) in any case, since the anterior cingulate has a complex layered structure and is not found in birds or reptiles, it does not help the case for feelings in non-mammals.

Where does that leave us? While similarity arguments can be used to make a strong cumulative case that conscious feelings are widespread among mammals, the massive dissimilarities between the neocortex of the mammalian brain and the much more primitive structures in the brains of birds and reptiles effectively undermine any arguments for conscious feelings in these animals that are based on "similarity" alone.

As we cannot yet identify structures in the brains of birds that are homologous to the mammalian neocortex, any argument for consciousness in non-mammals must instead be based on structures which play an analogous causal role in regulating behaviour - behaviour which, as I argue in the Appendix, equals or surpasses that of mammals in cognitive sophistication.

Because birds meet all the other neural requirements for consciousness and equal mammals in behavioural sophistication, I conclude that birds are probably phenomenally conscious. (The case for reptiles is far weaker, for reasons I discuss in the Appendix.)

Although the brains of all vertebrates are built according to the same basic pattern (Rose, 2002), I argue that fish and amphibians are not phenomenally conscious, on account of the massive neural and behavioural disanalogies between these vertebrates and conscious mammals. I describe these differences in the Appendix.

For invertebrates, whose brains are too unlike those of mammals to permit even a functional comparison of their brains with ours, an inferential approach is required to ascertain whether they have conscious feelings: we need to identify behaviour on their part that cannot be plausibly explained except in terms of phenomenally conscious states. Edelman, Baars and Seth (2005) make some useful suggestions regarding future neurophysiological and behavioural research with these creatures.

Evidence from studies of animal pain and hedonic behaviour lends support to the conclusion that phenomenal consciousness is confined to animals that have passed a certain neurological threshold. I discuss the evidence in detail in the Appendix, where I show that while the fundamental behavioural reactions to injurious stimuli (found in nearly all animals) are regulated at levels of the brain below the level of consciousness, the cognitive-evaluative components of pain (attention to the pain, perceived threat to the individual, and conscious generation of strategies for dealing with the pain), as well as the emotional unpleasantness (suffering) aspect of pain, are controlled by activity in the cortex - specifically, the anterior cingulate gyrus, prefrontal cortex, and supplementary motor area (Rose, 2002, pp. 19-21). These areas are only known to occur in mammals, although birds are thought to possess analogous structures (Edelman, Baars and Seth, 2005). Likewise, the hedonic behaviour of vertebrate animals (Dawkins, 1994; Cabanac, 1999, 2003) is confined to reptiles, birds and mammals; amphibians do not exhibit it (Cabanac, 2003).


1(c) Ethical implications of animal consciousness

So far, our investigation points to at least three distinct senses in which interests can be ascribed to creatures.

For some philosophers, a capacity for phenomenal consciousness is regarded as a sine qua non for having interests and being morally relevant. However, the above summary suggests that the ethical divide between mindless organisms and animals with minimal minds is a greater one than that between animals with minimal minds and phenomenally conscious animals, and the division between the simplest organisms and assemblages lacking intrinsic finality is greater still. Animals' interests, whether conscious or not, can be measured, and can be harmed by our actions. In the Appendix, I provide specific examples of how the welfare of fish (who lack phenomenal consciousness) can be measured using specific indices, and of how it can be harmed by practices such as aquaculture and angling.

Of course, we have a strong prima facie duty to refrain from treating phenomenally conscious animals cruelly, and, under more restricted circumstances, a duty to be kind to them. For companion animals, that would entail befriending them. Logically, any animals that lacked phenomenal consciousness (such as goldfish) could not serve as true "companions".


2. Evidence for rational agency in animals

Kacelnik (2004) points out that the concept of rationality differs between psychology, philosophy, economics and biology. For psychologists and philosophers, the emphasis is on the process by which decisions are made: rational beliefs are arrived at by reasoning, and contrasted with beliefs arrived at by emotion, faith, authority or arbitrary choice. Economists emphasise consistency of choice, regardless of the process and the goal: behaviour is consistent if it maximises some function that is called "utility". For biologists, rationality is the consistent maximisation of inclusive fitness across a set of relevant circumstances. My concern here is with the philosophical usage of the term. My aim in this section is a narrow one: to assess the merits of what I consider to be the best philosophical argument against the possibility of rationality in non-human animals, formulated by Kenny (1975) and discussed at length in Leahy (1994, pp. 154-156).

Kenny (1975) quotes Aquinas in support of his claim that animals are incapable of acting for reasons:

Perfect knowledge of an end involves not merely the apprehension of the object which is the end, but an awareness of it precisely qua end, and of the relationship to it of the means which are directed to it. Such a knowledge is within the competence only of a rational creature. Imperfect knowledge of the end is mere apprehension of the end without any awareness of its nature as an end or of the relationship of the activity to the end. This type of knowledge is found only in dumb animals (quoted in Kenny, 1975, p. 19).
Kenny elaborates:

When an animal does X in order to do Y, he does not do X for a reason, even though he is aiming at a goal in doing so. Why not? Because an animal, lacking a language, cannot give a reason... It is only those beings who have the ability to give reasons who have the ability to act for reasons (1975, p. 20).


Rico has a 200-word receptive vocabulary and can learn and remember the names of unfamiliar toys after just one encounter. However, behavioural ecologist Alex Kacelnik argues that "Rico probably has the general ability to connect things - not a language ability".
Image courtesy of Susanne Raus and Nature Publishing Group.

Before I comment on Kenny's argument, I would like to point out that if Kenny is right, we can safely say that no non-human animals are capable of rational agency. Recent studies of animals such as parrots (Pepperberg, 2002), dogs (Pilcher, 2004), chimpanzees (Savage-Rumbaugh et al., 1998) and dolphins (Herman, 2002) have certainly shown that these creatures possess a large receptive vocabulary containing hundreds of words. They can also acquire new concepts relating to colors, shapes and quantities (Pepperberg, 2002), and grasp the meaning of complex vocal and gestural commands (Savage-Rumbaugh et al., 1998; Herman, 2002) as well as novel commands (Herman, 2002) and even commands containing unfamiliar words (Pilcher, 2004).

On the other hand, the productive language skills of non-human animals appear to be very limited. The number of signals in their vocal repertoire is small and is restricted to objects experienced in the present, with no evidence of creative production of new sounds for novel situations. Moreover, there is no evidence that animals' vocal calling takes into account what other individuals believe or want (Hauser, Chomsky and Fitch, 2002, p. 1576). Finally, animal calls in the wild seem to have no analogue of names, semantics or syntax (Budiansky, 1998a, pp. 131 - 160; Budiansky, 1998b, pp. 105 - 106). We may therefore assume that no non-human animal is capable of justifying its actions.

A central feature of Aquinas' and Kenny's account of rationality is that it pertains to means and ends, like the intentional agency I described in chapter two - the difference being that a rational agent, according to Kenny, is able to explain why it is doing what it does.

While I endorse Aquinas' distinction between perfect and imperfect knowledge, I disagree with Kenny's assertion that an animal has to be able to give a reason for its actions in order to display perfect knowledge of its end. I propose that the perfect knowledge required for rational agency is capable of being exhibited in the agent's self-critical behaviour, if she continually refines her behaviour in such a way as to achieve a remote goal in the most efficient way. By "remote", I mean that:

(i) the agent's behaviour does not immediately realise the goal; and

(ii) unlike the four kinds of intentional agent we discussed in chapter two, the agent's behaviour is not driven by sensory feedback from the goal itself or anything that the agent may have learned to associate with the goal.

A rational agent, then, cannot simply "steer herself home" by following a minimal map: instead, she has to continually keep in mind her abstract concept of her goal, and adjust her behaviour accordingly, in order to achieve the goal in the way that seems best to her.

If my proposal is correct, there is no reason to believe that an agent with a capacity for self-critical behaviour must possess any faculty of language, in the human-like sense described by Hauser, Chomsky and Fitch (2002). The agent might therefore be unable to "give a reason" as Kenny (1975) requires. We should not infer from this that "dumb brutes" are irrational.

Case study: rational agency in crows


From the hook/wire experiment, this photo shows Betty the crow retrieving the bucket containing meat from a well, with a wire she has just bent. Image courtesy of Behavioural Ecology Research Group.

Perhaps the most impressive example to date of rational agency in non-human animals is that of a crow named Betty, who repeatedly displayed the ability to take a straight piece of wire, craft it into a hook with her beak, and use it to snag a piece of meat in a tube (Weir, Chappell and Kacelnik, 2002; Kacelnik, 2004). She had seen and used supplied wire hooks before but had not seen the process of bending. The crow even used different methods to fashion the hooks on different occasions. The method used by the crow was different from those previously reported and would be unlikely to work with natural materials. She had little exposure to and no prior training with pliant material, and had never been observed to perform similar actions with either pliant or non-pliant objects. The crow's ingenuity appears to surpass anything observed to date in chimpanzees.

Kacelnik (2004, p. 34) warns against making too much of his findings:

The attribution of any form of rationality cannot be based on one set of observations, however compelling this set may be. We do not know how domain-general the New Caledonian crows' ability to plan and execute solutions to new problems is.

Nevertheless, Betty the crow's action of bending the hook appears to be a perfect example of what I would call "fine-tuned pursuit of a remote (long-range) goal". In this case, the goal (meat) was a distant one, and the crow had to continually modulate her goal-directed activities, so as to achieve the ideal shape for snagging the meat. The crow's behaviour seems to have instantiated what Aquinas referred to as knowledge of the end qua end.

I would, however, agree with Budiansky (1998, pp. 122-128) that the vast majority of instances of tool use observed in animals show no genuine appreciation of the relationship between a means and an end. Capuchin monkeys in the wild, for instance, use sticks to kill snakes, hit other monkeys, and dig for food, but laboratory tests show that when given a choice of sticks for removing a peanut from a clear Plexiglas tube, they show no insight as to how to use the sticks. Most tool use by animals is either stereotypical (as in the case of the digger wasp), or can be regarded as an extension of basic instincts (e.g. birds use their beaks to perform a variety of tasks, so when a bird uses a stick as a "tool", the stick almost invariably serves as an extension of its beak). Viewed against this background, the behaviour of Betty the crow seems all the more remarkable.

Can rationality be domain-specific?

Kacelnik (2004, p. 34) raises the question of whether rational agency could be domain-specific, pointing out that even for humans there is no such thing as a totally domain-independent reasoning ability. On the account I am proposing, an individual's capacity for rational agency is limited by the range of concepts she can form. While non-human animals appear to possess bona fide concepts (Young and Wassermann, 2001), all but a few of them appear to lack any concept of "self", and there is no conclusive evidence to date that they possess a "theory of mind" which would allow them to attribute beliefs and desires to others. It should therefore occasion no surprise if some animals prove to be capable of rational agency in the context of tool-making, but not in a social context. Equally, it may well be the case that chimpanzees, whose tool use is apparently less sophisticated than that of crows, are nonetheless capable of rational agency in a social context. Currie (2004) argues that pretence is one clear indication of rationality, and makes a suggestion about the kind of evidence that would justify its ascription to non-human primates.

The reasons, I suggest, why human rational agency appears to be so general in its scope are that: (i) humans have an advanced theory of mind (giving them a range of social concepts inaccessible to other animals); and (ii) humans, unlike other animals, can generate an infinite number of sentences from a finite number of elements - a point illustrated in the sketch below.
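
Point (ii) can be made concrete. The sketch below shows how a recursion mechanism yields an unbounded set of sentences from a finite vocabulary, because any sentence can be embedded within a larger one. The toy grammar, vocabulary and function name are invented purely for illustration; nothing here is a model of human grammar.

```python
# A minimal sketch of why recursion yields unboundedly many sentences from
# finite elements: a toy grammar in which a sentence may embed a sentence.
import random

NOUNS = ["the child", "the dog", "the parent"]
VERBS = ["sees", "helps", "follows"]

def sentence(depth: int) -> str:
    """Generate a sentence; with depth > 0, embed a clause recursively."""
    subject, verb = random.choice(NOUNS), random.choice(VERBS)
    if depth == 0:
        return f"{subject} {verb} {random.choice(NOUNS)}"
    return f"{subject} believes that {sentence(depth - 1)}"

random.seed(1)
for d in range(3):  # each extra level of embedding yields a new sentence type
    print(sentence(d))
```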

Does following a logical rule indicate rational agency?

Despite studies cited in the literature (see Hurley and Nudds, 2004) of animals appearing to follow certain logical rules (e.g. exclusion rules, transitivity) which characterise elementary reasoning, I would argue that behaviour in non-linguistic animals that conforms to logical rules is insufficient to warrant the ascription of rational agency, as it may simply be the outcome of one of the other forms of rationality described by Kacelnik (2004), rather than genuine reasoning. For instance, many species of nonhuman animals appear to engage in transitive inference, producing appropriate responses to novel pairings of non-adjacent members of an ordered series without previous experience of these pairings, but some researchers continue to favour less "cognitive" explanations in terms of associative conditioning (see Allen, 2004).
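
To illustrate how rule-conforming behaviour might arise without reasoning, the sketch below gives a deliberately simplified, one-shot rendering of the "value transfer" style of associative explanation: items trained only on adjacent pairs of the series A > B > C > D > E end up ranked so that the novel, non-adjacent test pair (B, D) is answered "correctly", even though no logical step occurs anywhere. The reward rates and transfer coefficient are invented numbers, and the model is my own simplification rather than one defended by the authors cited above.

```python
# Toy associative account of "transitive inference": training occurs only on
# adjacent pairs (left-hand item always rewarded), yet the resulting values
# rank the novel non-adjacent pair (B, D) correctly - with no inference.
pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]  # left item rewarded

# Direct reinforcement rates: A is always rewarded in its trials, E never,
# and each middle item is rewarded in half of its trials.
R = {"A": 1.0, "B": 0.5, "C": 0.5, "D": 0.5, "E": 0.0}
THETA = 0.4  # fraction of a partner's value that "rubs off" (assumed)

partners = {item: [] for item in R}
for x, y in pairs:
    partners[x].append(y)
    partners[y].append(x)

# Each item's value = its own reward rate + value transferred from partners.
value = {item: R[item] + THETA * sum(R[p] for p in partners[item]) for item in R}

print(sorted(value.items(), key=lambda kv: -kv[1]))  # A > B > C > D > E
print("novel test pair B vs D ->", "B" if value["B"] > value["D"] else "D")
```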

To sum up: at present, we do not know if any non-human animals are capable of rational agency. The tool-making behaviour of some birds and the social behaviour of the great apes offer promising avenues for investigation.


3. Moral agency in animals?

De Waal (1996) defends the notion that non-human animals are moral agents, in a weak sense. His central claim is that the following four key ingredients of morality can be found within the animal kingdom, especially in primate societies:

(i) sympathy-related traits, including nurturance (care of one's own offspring), succorant behaviour (common in mammals: caring for individuals other than one's progeny, being emotionally affected by their suffering, and learning to adjust to their needs), and in the case of the great apes, cognitive empathy (the ability to understand other individuals' suffering by putting oneself in their position, and extrapolating what they would be able to do);

(ii) norm-related characteristics such as: active inculcation of rules by parents, grading of punishments depending on the age of the individual, the tailoring of instructions to the learner's level of experience; and conflict mediation by leaders;

(iii) reciprocity, manifested in behaviour such as reciprocal altruism and "tit-for-tat" co-operation strategies; and

(iv) the ability of animals to get along with each other by reining in their aggressive responses.

The question of whether other animals possess a primitive "theory of mind" that would allow them to be aware of others' beliefs or intentions, remains controversial (Horowitz, 2002; Nissani, 2004; Emery and Clayton, 2004), as does the question of whether a self-concept is a requirement for feeling sympathy (Gruen, 2002).

I maintain that there are several important ways in which the social behaviour of non-human animals falls short of even the most basic definition of morality.

Absence of moral norms

First, we cannot speak of morality in non-human animals unless parents can transmit moral norms to their offspring. In speaking of norms, I do not wish to commit myself to the controversial view that following moral norms defines what it is to act morally; rather, I am simply asserting that we do in fact often follow such norms when trying to do the right thing.

At first glance, the transmission of moral norms in animals seems unproblematic: animals transmit social rules to juvenile members of their groups (De Waal, 1996), and chimpanzee mothers transmit tool-using techniques to their offspring, including "traditional" techniques that are specific to their own group. Young chimpanzees acquire this knowledge through observation (Matsuzawa, 2002).

Neither of these forms of information transmission suffices to explain the instillation of moral norms. First, I would suggest that the juvenile animals described by De Waal (1996) who learned to conform to the "norms" of their group may not have been following a rule in the true sense of the word, but simply avoiding unpleasant consequences. In the case study in chapter two, where a fly learned to adjust its yaw torque to escape a heat beam, we did not speak of it as "following a rule". The only significant difference between the fly's avoidance behaviour and that of a social animal conforming to group "norms" is that in the case of the social animal, the adverse consequences are enforced by the other animals in its group. There is no need to suppose that these animals view the individual they "punish" as a rule-breaker. Instead, their "punishment" may simply be motivated by an innate or acquired dislike of the individual's behaviour, or a learned association between the individual's behaviour and some bad consequence for the group.

Following a rule is a much more sophisticated behaviour than avoiding a bad consequence, even in its most "primitive" forms, where people observe the norm only because they are afraid of getting caught. The activity of following a rule takes place against the backdrop of a society in which rules are enforced by other agents. A rule-follower understands that what causes the bad consequence is not the offending act itself, but the rule-enforcer's discovery of the act, coupled with her attribution of it to the offender - which is why criminals often try to cover up evidence of their deeds, and why a criminal accused of committing a crime may lie or blame someone else. The act of following a rule therefore requires an individual to possess a human-like theory-of-mind, and be able to attribute to other individuals not only beliefs, but mistaken beliefs about other agents. At the time of writing, evidence for this capacity in non-human animals remains inconclusive at best (Hauser, Chomsky and Fitch, 2002; Emery and Clayton, 2004).

The second mode of information transmission (observational learning) falls down for the same reason. One can acquire a technique through observation, but an ability to learn in this way is not a sufficient condition for being able to attribute the kind of beliefs to others that are required for following a rule.

Incapacity for self-improvement

Of all the metaphors we use to describe morality, perhaps the most ubiquitous is that of the path. Buddhists talk of an eight-fold path; Taoists talk of "the Way"; and in our own culture, the metaphor of "staying on the straight and narrow" is a familiar one. The insight behind this metaphor is a rich one: a moral agent must be capable of improving her conduct over the entire course of her life. To do this, she must possess an extraordinarily "thick" concept of time: she has to be able to recall her past actions in a temporal sequence, looking for signs of either improvement or back-sliding, and formulate resolutions to improve her conduct in the future. An individual that lacked the ability to reflect on her past and future life would be morally paralysed, unable to diagnose her character faults or resolve to rectify them. In other words, moral agency requires not only an episodic memory, which some birds may possess in a rudimentary form (Emery and Clayton, 2004; but see Tulving, 2002), but an autobiographical memory, which makes "mental time-travel" possible. Autobiographical memory is generally acknowledged to be a human specialty (Tulving, 2002). Most other domains in which one can perfect one's abilities (e.g. motor skills) do not require such a rich form of memory.

Inability to cultivate dispositions and attitudes

It is commonly acknowledged by moralists that the mere performance of a good act does not make an agent virtuous. As Hursthouse (2003) puts it:

A virtue such as honesty or generosity is not just a tendency to do what is honest or generous, nor is it to be helpfully specified as a "desirable" or "morally valuable" character trait. It is, indeed a character trait - that is, a disposition which is well entrenched in its possessor, something that, as we say "goes all the way down", unlike a habit such as being a tea-drinker - but the disposition in question, far from being a single track disposition to do honest actions, or even honest actions for certain reasons, is multi-track. It is concerned with many other actions as well, with emotions and emotional reactions, choices, values, desires, perceptions, attitudes, interests, expectations and sensibilities. To possess a virtue is to be a certain sort of person with a certain complex mindset. (Hence the extreme recklessness of attributing a virtue on the basis of a single action.) (Hursthouse, Rosalind, "Virtue Ethics", The Stanford Encyclopedia of Philosophy (Fall 2003 Edition), Edward N. Zalta (ed.), Web address: http://plato.stanford.edu/archives/fall2003/entries/ethics-virtue/.)

I would suggest that part of the reason why we feel a residual inclination to ascribe moral agency to animals is that they possess temperamental traits (e.g. being irascible, or placid) which superficially resemble the dispositions that we acquire in the course of our moral education. But as Hursthouse notes, virtues presuppose the existence of a complex mind-set which non-human animals lack.

An important part of this mind-set consists of the attitudes towards morally significant individuals which we inculcate in the young during the process of moral education. (It would be nonsensical to attribute these attitudes to non-human animals: a dog may properly be described as irascible, but cannot be meaningfully criticised for having a bad attitude towards human beings.) The simple injunction to "love people, use things" not only enjoins us to be kind to others, but defines the underlying attitude that should govern our conduct towards them. An example by Midgley (1984) of how parents typically instil these attitudes illustrates perfectly why it would be impossible for non-human animals to do so. The case Midgley considers is that of parents who find their small children tormenting animals:

We say, "you wouldn't like that done to you", and I do not think that this is a Father Christmas case of deliberate deception. We mean it (1984, p. 91).

The parent's attempt to get her child to put himself in another animal's shoes can only be transacted in the currency of language. By "language" I mean the faculty defined in the narrow sense described by Hauser, Chomsky and Fitch (2002), rather than the very broad sense in which animals can be said to possess it. To grasp this point, consider the example of a well-fed kitten catching a mouse which it does not eat, and playing with the captured mouse, despite its desperate efforts to escape. Even if we could imagine that the kitten's mother regarded her offspring's behaviour as morally abhorrent, how could she possibly inculcate such an attitude in her offspring? To inculcate attitudes, one needs to be able to generate the indefinite variety of sentences that may be required to justify moral norms and persuade one's children to change their behaviour - in other words, a recursion mechanism.

If the foregoing arguments are correct, then we are unlikely to ever discover instances of moral agency in non-human animals. At a minimum, moral agency requires a capacity to ascribe mistaken beliefs to others, a very "thick" concept of time (an autobiographical memory extending over the agent's past, present and future), and the possession of a recursion mechanism, providing the capacity to generate the large range of sentences that may be required to inculcate moral attitudes.