Introduction and Methodology


What this thesis is about

The title of this thesis is "Animals and other organisms: their interests, mental capacities and moral entitlements".

My research for this thesis has convinced me that not only the mental capacities, but also the interests of living creatures (which constitute the basis of any rights they may have or any duties we may have towards them) are very much a function of their biology. Accordingly, the first chapter of my thesis deals with two issues: what it means for something to be alive, and whether we can legitimately impute interests to an organism, simply because it is alive.

A large part of my thesis is devoted to a discussion of animal minds. Some of the outstanding questions that continue to be debated in this area include the following:

(a) what requirements does an animal have to satisfy in order for it to be credited with having some sort of mind, however rudimentary?
(b) what are the requirements for sentience - or, more precisely, phenomenal consciousness - and which animals satisfy them?
(c) what capacities does an animal need to possess before it can be credited with higher-order mental states? and
(d) what requirements would an animal need to satisfy before we could call it a moral agent?

Each question merits a thesis in itself, and I shall attempt here a comprehensive response to the first question only. The second chapter of my thesis contains a detailed description of what I call a "minimal mind" - the simplest kind of mind that could exist. The question of what kinds of emotions a "minimal mind" could be said to possess is discussed in chapter three.

The first part of the fourth chapter is devoted to the topic of consciousness. I present a short summary of scientific findings that are relevant to animals, before proposing some new philosophical terms for different kinds of animal consciousness which (I believe) reflect divisions occurring in the natural world more accurately than present terms do.

I discuss so-called higher-order mental states in the second part of chapter four, and suggest that animal research in this area is still in its infancy, so any findings must be regarded as extremely tentative. Finally, I argue that it is unlikely that any non-human animals qualify as moral agents.

Chapters five and six of my thesis deal with the moral entitlements of animals and other organisms. A philosophical discussion of "entitlements" can be couched in terms of "rights", but other terminologies are available too. Although I will be reviewing different theories of rights, the ethical component of my thesis will be principally focused on: what grounds the interests that animals and other living things have; how these interests can entail obligations on our part towards animals and other organisms, corresponding to moral entitlements on their part; and finally, what human beings are morally entitled to do to animals and other life-forms, in order to advance their own good.

Why write about living things?

I had originally planned to write exclusively about animals, but the recent outbreak of a philosophical crisis in two areas - philosophical biology and environmental ethics - prompted me to broaden the scope of my thesis to include all organisms. Two books, in particular, had a profound influence on my thinking.

First, the recent publication of Gary Varner's book "In Nature's Interest?" (1998) convinced me that the ethical divide drawn by some philosophers between sentient beings (which are said to be morally significant in their own right) and non-sentient beings (said to possess no moral significance whatsoever) was far too simplistic, and that the ascription of interests to living things, simply because they are alive, was philosophically legitimate. Varner also addressed the question of what it means to be alive, and how the interests of organisms (e.g. a plant, which thrives on sunlight) can be differentiated from the mere needs of non-living artifacts such as cars (which require fuel to run). Varner drew upon concepts in Darwinian biology, arguing that living things contain organs whose functions evolved because they conferred a survival advantage on their possessor. Hence living things can be properly said to have a "good of their own".

Meanwhile, Stephen Wolfram's ground-breaking work, "A New Kind of Science" (2002) alerted me to a major philosophical crisis regarding the definition of life. Recent developments in computer science have led to the emergence of what has been called "artificial life". For some philosophers and scientists, these developments call into question the distinction between living and non-living things. Wolfram (2002) contends that all of the general behavioural characteristics of living things - for example, self-movement, responsiveness to stimuli, nutrition and self-reproduction - can be mimicked by a variety of computational systems - even systems with very simple rules. Another general feature commonly thought to distinguish living things is their complexity. However, Wolfram demonstrates that virtually all systems that are composed of interacting components - including the biological systems we call organisms - are in fact equally complex, if we measure a system's complexity by its computing power. Even the capacity to evolve, proposed by Varner and others as a distinguishing property of life, is not unique to living things: some artificial life-forms possess this capacity too. Wolfram concludes that it is specific features, rather than general properties, that distinguish what most people would call "living things" from other entities. These specific features include arbitrary structural properties such as "having components [like] proteins, enzymes, cell membranes and so on", and also peculiar chemical properties such as "being based on ... water, sugars, ATP and DNA" (2002, p. 825).
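Wolfram's mimicry claim is easy to illustrate concretely. The following sketch uses Conway's Game of Life rather than Wolfram's own elementary cellular automata, but it makes the same point: a rule that can be stated in two lines suffices to produce a pattern (the "glider") that exhibits self-movement, one of the hallmark behaviours of living things. The code is purely illustrative and is not drawn from Wolfram's book.

```python
from collections import Counter

# Conway's Game of Life: a cell is alive in the next generation iff it
# has exactly 3 live neighbours, or it is currently alive and has 2.
# That two-clause rule is the entire "physics" of this world.

def step(cells):
    """One generation of Life; cells is a set of (row, col) live-cell coordinates."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        pos for pos, n in neighbour_counts.items()
        if n == 3 or (n == 2 and pos in cells)
    }

# The glider: five live cells that, under the rule above, reconstitute
# themselves one cell down and one cell to the right every four generations -
# a simple system exhibiting something like self-movement.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

shifted = {(r + 1, c + 1) for (r, c) in glider}
print(state == shifted)  # True: the pattern has translated itself diagonally
```

The point, for present purposes, is not that the glider is alive, but that "moves itself around" cannot by itself distinguish organisms from trivially simple computational systems.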

If Wolfram is right, then centuries of thinking in the field of philosophical biology are rendered obsolete, as his arguments undermine the distinction commonly drawn between living and non-living systems. Indeed, it would be perfectly legitimate to say that any natural or artificial computational system whose complexity - as measured by its problem-solving ability - matches that of living things is in some way alive. In effect, this means that most machines and most natural systems are alive. This is in fact Wolfram's preferred position, which he refers to as "neo-animism".

Varner's and Wolfram's philosophical arguments can be combined to generate a parallel crisis in the field of environmental ethics. For if, as Varner argues, all living things have interests, then Wolfram's arguments imply that machines and most natural systems - such as the wind or the flames of a fire - have interests too.

Many people - including myself - would regard the ascription of interests to such things as car engines or the wind as an ethical reductio ad absurdum. Accordingly, one of the main aims of this thesis is to formulate a much narrower, more focused definition of what it means to be "alive", which is scientifically and philosophically plausible, as well as ethically relevant.

Why write about animals?

I decided to write about the mental capacities, interests and moral entitlements of animals for several reasons. First, the successes of artificial intelligence (exemplified by cases as diverse as: the computer that beat Kasparov at chess; the Sojourner Rover that explored Mars; robots such as Cog and Kismet, which can mimic human emotions; and mobile robots such as ASIMO) have led many people to ask whether non-living things can have minds too. And if (as many people believe) having a mind is a morally significant property, then do we have duties towards "artificial animals" as well as real ones?

Another reason which prompted me to write about animals was the disturbing lack of terminological rigour in the philosophical and scientific literature relating to animals' mental capacities. It is bad enough that there is no agreed definition for overtly mentalistic terms such as "mental state", "belief" or "desire", but the lack of agreed norms of usage even for lower-level terms such as "flexible", "learning" and "sense" - which might be used to define mental states - is a severe impediment to philosophical progress in the search for animal minds.

Even more alarming than the lack of definitional rigour is the lack of philosophical agreement on the appropriate methodology for investigating animals' alleged mental states. Thus one of my objectives in this thesis is to put forward a suitable methodology for investigating animal minds.

An additional reason for writing about animal minds is that until very recently, most philosophers have been largely unaware of the vast scientific literature relating to the mental capacities of different kinds of animals. Although some recent philosophers (e.g. Dretske, 1999; Beisecker, 2000; Allen, 2002, 2003; Carruthers, 2004) have certainly familiarised themselves with this literature, no-one (to the best of my knowledge) has attempted a systematic philosophical overview of what we know - and don't know - about animals' mental capacities. One of my aims in this thesis is to provide such an overview.

Philosophical discussions of animals' mental capacities tend to be dominated by the question of which animals are phenomenally conscious - which (I shall argue) is not necessarily the same as asking which animals have mental states. Even on the subject of consciousness, however, the philosophical literature is curiously unbalanced. Clinical cases of abnormal neurological syndromes are picked over, and used to infer the existence of new senses of the word "consciousness"; these categories, which principally derive from research on human beings, are then assumed to apply to other animals. I argue that this piecemeal approach to the problem of consciousness ignores the "big picture". Few philosophers appear to be aware of the extensive scientific literature, stretching back to the 1930s, on the necessary neurophysiological conditions for consciousness in humans and other animals. In this thesis, I summarise what is currently known, drawing freely upon scientific overviews (e.g. Rose, 2002) of key findings in the field. Although grey areas remain, I argue that the question of which major groups of animals possess phenomenal consciousness can now be answered with a high degree of confidence.

A final reason for writing about animals is the recent proliferation of ethical dilemmas in everyday life, relating to our treatment of animals. Xenotransplantation, the breeding of decerebrated animals for food and scientific research, the culling of species in order to rescue ecosystems, and the use of cloning to save endangered species are just a few issues which spring to mind. What I attempt to do in the latter part of this thesis is propose some general ethical principles that would enable us to evaluate these cases.

What are animals?

It is a remarkable yet much-overlooked fact that the philosophers and scientists who have debated issues relating to animal minds and the moral significance of animals in recent years have done so without even attempting to define their subject - animals - at the outset. Among philosophers, the few who have provided a definition of what an animal is have generally attempted to delineate which animals qualify as morally significant, according to their ethical theories. Scientists who have written about animal minds have been even less forthcoming, generally preferring to discuss what animals can do rather than defining what they are.

While the two leading philosophical proponents of animal liberation, Singer and Regan, have written extensively on the subject of which animals are ethically significant, and to what degree, neither even attempts to address the question of what animals are. For Singer, it is animals that are capable of feeling pain that matter; for Regan, only subjects-of-a-life, or animals which "have beliefs and desires, possess memory and expectations about ... the future, and are able to act intentionally in seeking to fulfill their desires and purposes" (1988, p. 76), qualify as bearers of rights, although he acknowledges that there are moral constraints on our dealings with other animals. More recently, Varner (1998) has drawn extensively on findings from the fields of behavioural science and neurophysiology, in an attempt to identify which animals can be said to experience pain and desire, but at no stage does he define the term "animal".

Leahy (1994) is the only recent philosopher I know of who offers his readers some sort of definition of animals, which is independent of his ethical perspective. He writes:

This then is the status of animals. They are primitive beings, to recall the psychic hierarchy of Aristotle and Aquinas and the scientific legacy of Darwin, spanning the continuum between plants and human beings. They exhibit the pre-linguistic sensations of pain and the ancestral tokens of human attributes such as deliberative intent, rational planning, choice, desire, fear, anger, and some beliefs, where our guiding criteria are the close similarity of their behavioural patterns, in like circumstances, to our own. (1994, pp. 165-166, italics Leahy's).

While this passage tells us quite a lot about animals' mental capacities and behaviour, it does not tell us what animals are, how we can identify them, or even how we can distinguish them from plants. Leahy later asserts that "animals, unlike plants, are conscious" (1994, p. 166), but suggests elsewhere (1994, pp. 126-127) that not all animals are conscious, and that the line between conscious and non-conscious animals is inherently blurred, because there is an element of arbitrariness in our linguistic decision as to whether a piece of animal behaviour is sufficiently similar to our own (in similar circumstances) to be described as "conscious".

Writing from a scientific perspective, Griffin (1992), Gould & Gould (1994) and Budiansky (1998a) are fairly representative of naturalists who have addressed issues relating to animal cognition in recent years. None of these authors offers even a perfunctory definition of the term "animal". While these authors discuss various kinds of cognitive feats that animals are capable of, they fail to delineate clearly which kinds of animals are capable of which feats.

Detailed discussions of the sensory and learning capacities of different groups of animals can be found in textbooks on animal nervous systems, behaviour and learning. However, these textbooks tend to use clinical rather than "mentalistic" terminology, and avoid discussing the implications of their material for the philosophical problem of animal minds, let alone the ethical problems regarding animals' interests.

The absence of a true definition of "animal" in the philosophical and scientific literature explicitly dealing with animal cognition and the ethical issues relating to animals is puzzling. Although there is no universally agreed definition of the term "animal", animals share certain features and can be distinguished from other kinds of organisms by certain traits. Roughly, animals are multicellular organisms whose cells are eukaryotic (contain nuclei) and are surrounded by a characteristic extracellular matrix, composed of collagen and elastic glycoproteins. Animals, unlike plants, are heterotrophic, feeding on other organisms. Another distinguishing feature of animals is that their embryos go through a blastula stage.

I have deliberately highlighted the failure of previous authors to define the term "animal" because I believe it has blighted their approach to the question of animal minds. A sensible way of investigating this question might be to first examine the biological properties that define animals, and attempt to identify those properties that may be relevant to having a mind or having interests. One would start with a large group - all animals, or even better, all organisms - and look for mind-relevant characteristics, gradually narrowing one's focus to smaller groups until one found a set of physical characteristics that was sufficient to warrant the ascription of mental states to a creature.

Instead, the reverse procedure has been generally followed: writers have attempted to define what they are looking for (e.g. what it is to have a mind, or have interests) and then identify the animals that have it. My concern with the latter approach is that those who proceed in this way may be unintentionally blinded by their own philosophical pre-conceptions, leading them to overlook properties that are relevant to what they are looking for.

Accordingly, one of my objectives in this thesis is to put forward an alternative, a posteriori approach to the perennial questions of which animals have minds, what kinds of minds they have, and what kinds of interests can be imputed to them as a consequence. I shall therefore attempt to minimise the philosophical assumptions I make about mental states at the outset of my enquiry.

Why write about the moral entitlements of animals and other organisms?

Animals have received a lot of attention in philosophical circles, especially since the publication of Peter Singer's "Animal Liberation" in 1975. Some scientists and philosophers (e.g. Dennett, Gould and Griffin) contend that there is a difference of degree rather than kind between human and animal minds, and have been prepared to attribute language use, logical thinking, rudimentary self-consciousness and even moral awareness to some animals (especially apes and dolphins). Their assertions have been roundly criticised by other philosophers (e.g. Leahy, 1994). However, if these claims proved to be correct, that would certainly force a re-assessment of the moral status of animals.

There is a diversity of philosophical opinions regarding the conditions animals need to satisfy in order to qualify as morally significant. Some philosophers argue that we have no moral duties to animals as such, because they do not possess the qualifications necessary to be morally significant beings. These minimum qualifications are variously defined as rationality (Aquinas), self-consciousness (Kant) or the ability to enter into contracts (Leahy, Carruthers). One consequence of this line of thinking is that we cannot wrong animals, no matter what we do to them.

Utilitarians such as Singer (1977), influenced by Bentham's famous defence of animals, contend that our duties towards animals arise from the simple fact that they can suffer, which gives them an interest (no less valid than our own) in avoiding suffering and creates an obligation on our part not to hurt them, except when competing utilitarian interests take precedence. On this view, however, plants and non-sentient animals have no interests that merit consideration.

Finally, some philosophers (Taylor, 1986; Varner, 1998) have defended a biocentric ethic, which places value on all living things. The principal objection to this viewpoint is that organisms lacking mental states cannot be said to have interests, as they are incapable of taking an interest in anything. An additional problem is that opinions differ on whether an individual organism may be destroyed in order to protect the holistic good of the community to which it belongs (see for instance Callicott, 1980; Varner, 1998).

Creatures that have interests of their own are creatures towards whom we have prima facie duties, at the very least. However, certain philosophers such as Regan and Francione have gone much further, arguing that some animals (e.g. for Regan, mentally normal mammals aged one year or more) possess sufficient cognitive sophistication to qualify as bearers of rights. Other philosophers (e.g. Leahy, 1994) have attacked these arguments, for overlooking certain vital distinctions between people and other animals.

One of my aims in writing this thesis is to put forward a clear, coherent account of the kinds of interests that can be meaningfully ascribed to animals and other organisms. I argue in chapters five and six that identifying the interests of humans and other kinds of living things facilitates resolution of outstanding questions relating to our duties towards creatures, their rights, and our own moral entitlements as human beings.

Methodology - Questions that need to be addressed


Before I discuss animal minds, I propose to place them within their biological setting, by defining what it means to be a living organism, and then defining what it means to be an animal. Any attempt to define the meaning of the term "life" will have to address the following general issues:


Any discussion of animal minds will have to address the following methodological questions:

Interests of Living Organisms

With regard to the interests of animals and other organisms, the following methodological questions are relevant:

Methodological proposals for this thesis

1. What does it mean to be alive?

Any philosophically adequate definition of "life" must be scientifically well-informed. Accordingly, I make reference to scientific definitions of life that are drawn from recent textbooks, as well as some recent proposals that have emerged from inter-disciplinary symposia of scientists who have met to discuss this subject.

I conclude, on the basis of my brief survey, that contemporary scientific definitions of "life" are inconsistent, philosophically muddled and, as a rule, not sophisticated enough to withstand the devastating attack on the validity of the distinction between "living" and "non-living" made by Wolfram (2002), who, as we have seen, claims that all of the general properties that distinguish living things can be mimicked by abstract computational systems.

To formulate an adequate definition of life, in response, I have found it necessary to return to a very old source: the writings of Aristotle, the first (and arguably most original) philosopher to comprehensively address the issue of what it means to be alive. The key insight which I borrow from Aristotle is his description of the soul as both the final and formal cause of a living body. I attempt to re-interpret this claim, using current concepts drawn from systems theory, computing science and biochemistry. I argue that a proper understanding of this claim not only answers Wolfram's critique of modern definitions of life, but also sheds light on other outstanding problems in philosophical biology: it can tell us how we are supposed to distinguish the needs of artifacts from the interests of organisms; how to construct an artifact that could truly be said to be alive; how to identify any biological interests which organisms may have, apart from their desires; and why the satisfaction of organisms' interests matters ethically.

Besides their distinguishing formal and finalistic properties, living things also possess essential material and causal properties, as Aristotle himself recognised - even if they share many of these properties with non-living systems. I also attempt to catalogue these necessary properties.

Additionally, I argue that living things need to possess certain dynamic properties to enable them and their descendants to adapt to change in an unstable world. The vital contribution made by Darwin's theory of evolution to my proposed definition of life is that it highlights these necessary dynamic properties. I conclude that Aristotelian and Darwinian approaches to the problem of life actually complement one another rather than opposing each other, as is commonly believed.

2. Mental states in animals

There are many different kinds of evidence for mental states that merit serious philosophical consideration, but there is one kind of "evidence" that should, I believe, never be appealed to. Arguments or thought experiments pertaining to mental states which are based on mere logical possibility are philosophically illegitimate. To show that a state of affairs is logically possible (or not obviously logically impossible) does not establish that it is physically possible. We can imagine organisms that look and even act like us, but have no subjective experiences, as in Chalmers' "zombie world" (1996, pp. 94 - 99); we can also imagine entities such as gas clouds, force fields or ghosts having mental states. All this proves is that mental states are not logically supervenient on their underlying physical states. However, as Chalmers himself points out (1996, p. 161), they may still be physically supervenient on these states.

2(a) Is there a set of necessary and sufficient conditions for having a mind?

One of my provisional objectives in this thesis is to list the necessary and sufficient conditions for possessing mental states. I refrain from assuming that there is a unique set of sufficient conditions for having a mind. On the contrary, there may well be several varieties of "minimal minds".

Of course, the attempt to define a set of sufficient conditions for having a mind may well fail. That in itself would be a philosophically significant result. We should not expect to find neat definitions for every concept, and the concept of "mind" may prove too elusive for such a definition.

Then again, it may not. My aim is not to define "mind" in all its possible varieties, but to define the conditions an individual would have to satisfy before it could be said to possess the most primitive kind of mind there could be - a "minimal mind", as I call it.

There are two plausible-sounding reasons for believing that any attempt to define the conditions for a minimal mind is doomed to failure. First, it could be argued that the concept of mind, like that of a game (discussed by Wittgenstein), is incapable of definition, because it is inherently open-ended. But even though the concept of "mind" appears to be open-ended, there is no reason why the concept of a minimal mind should be. A minimal mind may well turn out to be definable in terms of a small, finite set of properties.

Second, it might be argued that the concept of mind - even a minimal one - is indefinable because it is inherently subjective: it cannot be understood, but only experienced. However, the common assumption that subjectivity is a defining feature of "mind" may simply reflect the fact that subjective, first-person states or "phenomenal consciousness" are commonly regarded as an essential feature of human minds. In any case, a great many of our own mental states occur below the level of conscious awareness. For instance, it has been established experimentally (Berridge, 2003) that our minds can process subliminal stimuli which appear and disappear too quickly for us to register them on a conscious level, and many of our habitual actions are not performed consciously. It may turn out to be the case that for creatures with minimal minds, the element of phenomenal consciousness is wholly lacking from their mental states. It would therefore be dogmatic to assume that subjectivity is the defining feature of mind.

I conclude that the enterprise of defining the conditions for a minimal mind remains a viable one.

2(b) What is the relationship between being alive and having a mind?

There are some philosophers and scientists who maintain that all living things - even the humblest bacteria - possess minds of some sort. If they are correct, then the animal kingdom is a small subset of a much larger group of individuals with minds. Indeed, it has been argued (e.g. by Birch, 2001, p. 4 ff.; Chalmers, 1996, pp. 293 - 299) that any individual (even an electron), or any system that registers information, is capable of undergoing experiences. This point of view is known as panpsychism.

It is not the aim of this thesis to address such a grand metaphysical claim. Instead, I propose to discuss three narrower questions. First, given that all natural systems - including living things - can be viewed as computational devices, for reasons which will be elaborated in chapter two (see Wolfram, 2002, pp. 720 - 721), we have to ask: is there any difference between living things and artificial computational devices, which would preclude the latter from having minds? In other words, is being alive a necessary condition for having a mind? (The implicit assumption here, that artifacts - as we know them - are not alive, will be addressed in chapter one.)

Second, is being alive a sufficient condition for having a mind, as some researchers have argued? For instance, some researchers argue that bacteria satisfy all the pre-requisites for what they call "minimal cognition" (Di Primio, Muller and Lengeler, 2000), while others claim that microscopic organisms such as amoebae and paramecia are capable of associative learning (classical conditioning). For some philosophers, this would constitute evidence of having a mind of sorts.
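To make clear what the claim of "associative learning (classical conditioning)" amounts to in computational terms, here is a minimal sketch of the standard Rescorla-Wagner learning rule, on which the associative strength of a conditioned stimulus is nudged toward the level supported by the unconditioned stimulus on each paired trial. The parameter values are purely illustrative; nothing here is drawn from the studies of bacteria or protozoa cited in the text.

```python
# Rescorla-Wagner sketch of classical conditioning: on each CS-US pairing,
# the associative strength V moves a fraction of the way toward the maximum
# strength the US can support (lambda). Learning is driven by the
# prediction error (lambda - V), so gains shrink as V approaches lambda.

def train(trials, learning_rate=0.3, us_strength=1.0):
    """Return associative strength V after the given number of CS-US pairings."""
    v = 0.0
    for _ in range(trials):
        v += learning_rate * (us_strength - v)  # delta-V proportional to error
    return v

print(round(train(1), 3))   # 0.3   - weak association after a single pairing
print(round(train(20), 3))  # 0.999 - near-asymptotic after many pairings
```

Whether behaviour of this kind, in an organism without a nervous system, would warrant talk of a "mind" is precisely the question at issue in the text.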

Third, if being alive is not a sufficient condition for having a mind, then what is? What kinds of creatures have mental states?

I make four broad assumptions about mental states as they occur in living organisms. First, mental states don't just "pop up" in any entity, for no reason. Any creature that possesses mental states must have some innate capacity for having these states. (The same requirement would apply to any artificial device that was found to possess these states.)

Second, a living creature's capacity for mental states is grounded in its biological characteristics. I am not here equating mental states with biological properties; rather, I simply assume that differences in organisms' mental capacities can be explained in terms of differences in their physical characteristics. This in no way commits me to the much stronger (and more speculative) supervenience thesis, which states that all mental properties and facts supervene on physical properties and facts.

Third, I assume that the mental capacities of animals supervene upon (or are grounded in) states of their brains and nervous systems. I am not, however, assuming that every organism with a mind must have a brain, or even a nervous system; indeed, I intend to examine alleged instances of mental states in organisms lacking nervous systems. In short, what I attempt to do in chapter two is to identify the set of biological capacities that warrant the ascription of mental states - however rudimentary they may be - to an organism.

Finally, I make the extremely modest assumption that at least some non-human animals possess the requisite capacities for a minimal mind, even if (as a few philosophers and scientists argue) they are lacking in phenomenal consciousness. The assumption that some animals have mental states is woven into our own language to such a degree that animals often serve as primary exemplars of these states. (This is especially true for words used to describe desires and feelings.) To deny mentality to all non-human animals would thus render much of our mentalistic terminology meaningless.

2(c) What are the most primitive mental states?

After discussing the methodologies that have been proposed for identifying mental states, I critically examine two approaches - the computational approach of Stephen Wolfram and the intentional stance developed by Daniel Dennett. In Wolfram's approach, natural and artificial systems are regarded as devices that can be used to compute the solutions to a range of problems, whereas in Dennett's intentional stance, these systems are viewed from a hypothetical standpoint, as if they were intentional agents with goals of their own, coupled with beliefs about the appropriate means of attaining their goals and desires for the relevant means and ends.

Although I identify some serious problems with both approaches, I argue in chapter two that both are philosophically fruitful. Because the terminology behind Wolfram's and Dennett's approaches can be applied to a wide range of natural and artificial systems, I argue that it is plausible to assume that the two approaches apply to all entities with minds. In other words, all entities with mental states can be regarded as computational devices, and their behaviour can be modelled as if they were intentional agents with beliefs and desires. I have chosen to use Dennett's intentional stance as a methodological starting point in our quest for animals with mental states.

Of course, many intentional systems lack mental states: mindless artifacts such as thermostats (to use an example of Dennett's) can be described using the intentional stance, and even the most "primitive" organisms (which may well turn out to be mindless) can be described in this way. Accordingly, one methodological proposal I make in this thesis is that we should ascribe mental states to an organism if and only if doing so allows us to describe, model and predict its behaviour more comprehensively, and with a degree of empirical accuracy as great as or greater than that of alternative, non-mentalistic accounts. Such an organism, I argue, qualifies as a bona fide intentional agent. In all other cases, the ascription of mental states to entities is scientifically unhelpful.

I also claim in chapter two that Dennett's intentional stance can be divorced from the use of terms such as "beliefs" and "desires". An alternative "language game" is possible: Dennett himself uses the language of "information" (1997, p. 34) and "goals or needs" (1997, pp. 34, 46) to describe the workings of thermostats (1997, p. 35). Thus at least two different kinds of intentional stance can be adopted to explain the behaviour of an organism - a mind-neutral "goal-centred" stance which describes the organism's behaviour in terms of its goals and the information available to it - and a mentalistic "agent-centred" stance which views the organism as an intentional agent and explains its behaviour in terms of its desires and the beliefs it entertains.

My investigative procedure will be to adopt a mind-neutral "goal-centred" intentional stance by default, switching to an agent-centred stance if and only if it turns out to be a more scientifically productive way of describing and explaining an organism's behaviour.

Some verbs in the English language are peculiarly reserved for mental states, and are therefore unsuitable for a mind-neutral intentional stance. The choice of these verbs may change over time: at one time, the suggestion that an individual could sense an object or remember it mindlessly would have seemed odd, but today, we have no problems in talking about the sensor in a thermostat, or the memory of a computer (or even a piece of deformed metal). Indeed, there are many verbs now used to describe organisms' behaviour, which no longer carry mentalistic overtones: signal, respond, attack, attract, repel, search, avoid, react, communicate and compete, to name just a few. It is a matter of convention that these verbs have been shorn of their former intentional connotations. These neutral verbs may be used freely, when understood properly.

For verbs that currently retain a mentalistic connotation, special care must be taken to make sure that they are not employed in a way that robs them of their mental content. The identification of this minimal mental content is an important philosophical task.

My proposed methodology for investigating primitive mental states could be criticised on the grounds that it embodies too many preliminary assumptions, when it stipulates that all entities possessing mental states must be capable of entertaining beliefs and desires. However, this requirement should be seen as fairly innocuous, as I do not specify in advance what a belief or desire might be. Philosophical definitions of "belief" range from the extremely stringent (Aristotle famously denied belief to irrational animals) to the ultra-lax (Dennett, for instance, is willing to ascribe beliefs to thermostats). I shall attempt to formulate a constructive definition of the term in chapter two, where I also address the objection that some of our mental states appear not to require the occurrence of beliefs and desires.

In particular, I do not wish to prejudice my enquiry by assuming that animals with beliefs and desires necessarily have subjective, first-person, "phenomenally conscious" mental states. The issue of which animals are phenomenally conscious will be deferred until chapter four.

2(d) How do we identify the occurrence of primitive mental states?

In this thesis, I propose to adopt an a posteriori, biological, "bottom-up" approach to the philosophical problem of animal minds. Instead of first attempting to define what a minimal mind is and then seeking to determine which animals fall within the scope of my definition, I shall begin by trying to define what an animal is. This is not merely a scientific matter: while a zoologist may be able to tell us how animals differ from closely related organisms such as plants and fungi, it is the task of philosophy to untangle questions such as what it means to be an organism (i.e. "alive") or whether a robotic bee should be classified as an animal (and if not, why not).

Leaving aside the question of whether any non-living entities can be said to have minds (a question I discuss in chapter two), one sensible way of identifying mental states in animals and other organisms might be to first examine the biological properties that define living things, and attempt to identify those properties that may be relevant to having a mind. One would start with a large group, such as the set of all living organisms - which I shall refer to as L for convenience - and carefully examine the definition of "organism", as well as the general properties of organisms, for anything that may be relevant to having a mind. A philosophical "winnowing process" could then be applied to these features, to ascertain whether singly or in combination, they sufficed to define the conditions for having mental states. If these features proved to be insufficient, one would then narrow one's focus to a smaller set of organisms - such as the set of all animals (call it A) - and once again critically examine the definition, as well as those universal traits of animals that might be relevant to having a mind. One could review successively smaller sets of organisms in the same way - the set of all animals with nervous systems, the set of animals with a brain, and so on - until one found a set of physical and/or behavioural characteristics that was sufficient to warrant the ascription of mental states to a creature. These characteristics can be said to define a set M of all creatures with mental states.

This is the strategy I propose to adopt in chapter two of this thesis. I make no assumptions at the outset regarding the scope of M. M may turn out to be co-extensive with L, or it may be co-extensive with the set of all animals (call it A), or it may be a subset of A. In the process of converging from L to M, I hope to build up a set of conditions that may be relevant to the possession of a mind by an individual. As each new condition is added, the question of whether the set of conditions is necessary and/or sufficient for having a mind will be re-visited.
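The converging procedure described above can be sketched schematically in code. The organism kinds and the candidate "mind-relevant" properties below are purely illustrative placeholders of my own devising, not empirical claims about which creatures actually possess which traits; the sketch shows only the logical shape of the winnowing from L through A towards a candidate M.

```python
# A schematic sketch of the "winnowing" strategy: start from a broad set of
# organism kinds, each tagged with candidate mind-relevant properties, and
# narrow the set by demanding successively stricter combinations of
# properties. All names here are illustrative placeholders.

ORGANISMS = {
    "bacterium": {"alive", "responsive"},
    "plant":     {"alive", "responsive"},
    "jellyfish": {"alive", "responsive", "nervous_system"},
    "honeybee":  {"alive", "responsive", "nervous_system", "brain", "learns"},
    "octopus":   {"alive", "responsive", "nervous_system", "brain", "learns"},
}

def members_with(required):
    """Return the set of organism kinds exhibiting every required property."""
    return {name for name, props in ORGANISMS.items() if required <= props}

# Successively stricter candidate conditions, converging inward from L.
L = members_with({"alive"})                                  # all organisms
A = members_with({"alive", "responsive"})                    # "animals" (toy)
M_candidate = members_with({"alive", "responsive",
                            "nervous_system", "brain", "learns"})

# By construction each narrower set is contained in the broader ones.
assert M_candidate <= A <= L
```

Each added property either leaves the set unchanged or shrinks it, mirroring the re-examination, at every step, of whether the accumulated conditions are yet sufficient for having a mind.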

Henceforth, I shall generally focus on organisms which are developmentally mature and physically normal, as my primary concern is to identify the species whose members can be said to have minds, rather than ascertain which individuals have minds. None of what follows should be taken as belittling the interests of immature organisms or organisms with physical abnormalities, in any way. I address the duties we have towards such organisms in chapter five.

If M turns out to be a subset of L, then how should we construct a sufficient set of conditions for a species' being a member of M? One way might be to systematically catalogue all of the biological and behavioural characteristics of the largest groups of organisms, searching for any property that might be relevant to having a mind before passing on to a smaller group, and finally stopping one's search after having built up a set of conditions that might serve to define a "minimal mind". However, doing the job properly would be a mammoth enterprise, requiring far more work than a single thesis could adequately accommodate. It would also make rather tedious reading, as the vast majority of organismic traits have no relevance to the possession of mental states.

Instead, what I propose to do is narrow down my search by examining several broad categories of behavioural and biological properties that have been proposed in the philosophical and scientific literature as relevant to having a mind, and sift through them, all the while attempting to put together a constructive definition of "minimal mind". In particular, I discuss sensory capacities, memory, flexible behaviour, the ability to learn, self-directed movement, representational capacity, the ability to correct one's mistakes and possession of a central nervous system. Within each category of "mind-relevant" properties, I examine the different ways in which these properties are realised by different kinds of organisms. The biological case studies which I invoke range from the relatively simple (viruses) to the most complex (vertebrates). In other words, I propose to converge from L towards M within each category of "mind-relevant" properties.

The risk of pursuing the strategy outlined above is that one may overlook a mind-relevant biological property which no-one has drawn attention to yet. This risk should not be under-estimated. Until the "mammoth enterprise" alluded to above has been completed - which is unlikely to be any time soon - any proposed list of sufficient conditions for having a mind should be treated as tentative and provisional. New conditions may need to be added as we learn more about living things.

I suggested above that there may turn out to be more than one set of sufficient conditions for having a mind. The step-by-step accumulation of necessary and/or sufficient conditions for having a mind may not simply converge towards a single set. Instead, it may converge on several distinct sets of animals with minds. If M is defined by more than one set, then the set of sufficient conditions for having a mind may turn out to be a conjunction of (a) the conditions shared by species in all these sets, and (b) a disjunction of the extra conditions satisfied by species in each set belonging to M. In this case, only the conditions identified in (a) will be necessary for having a mind.
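The logical form of this possibility can be stated schematically. Suppose (the notation is mine, introduced purely for exposition) that M comprises several distinct sets of species, each picked out by a shared core of conditions C together with its own extra conditions E_i. Then the sufficient condition for a species x to have a mind would take the form:

```latex
\mathrm{Mind}(x) \iff C(x) \,\wedge\, \bigl( E_1(x) \vee E_2(x) \vee \cdots \vee E_k(x) \bigr)
```

Here only C(x) is necessary for having a mind, while each conjunction C(x) with E_i(x) is by itself sufficient.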

A final caveat is in order. It should not be assumed that animals which are phylogenetically closer to members of M are necessarily smarter. It may be the case that there are separate "islands of mentality" in the animal kingdom. For instance, echinoderms (such as starfish), which are closely related to the vertebrates, are not renowned for their cognitive prowess, whereas some of the more distantly related insects and molluscs are much "brainier", both neurophysiologically and behaviourally.

2(e) Emotions

In the third chapter, I discuss animal emotions - in particular, the kinds of emotions a "minimal mind" could be said to possess. I make certain assumptions about emotions - in particular, that they are mental states (not necessarily conscious ones) which are amenable to scientific investigation, and that they typically have intentional objects (i.e. are about something). Thus on methodological grounds I reject both pure subjectivist theories, which envisage emotions as essentially private inner states and thereby exclude them from the domain of science, and behaviourist accounts, which define emotions purely in terms of outward behavioural dispositions and thus fail to explain why emotions should be treated as mental states. That still leaves us with a bewildering variety of rival theories, including cognitivist accounts, appraisal theories, the James-Lange theory (which construes emotions as internal bodily states), and neurophysiological accounts.

Instead of discussing these theories in depth, I propose to evaluate each of them using two sets of criteria: first, is the theory compatible with the available neurological and behavioural evidence, and second, does it meet the philosophical requirements stipulated - in particular, can it explain the "aboutness" of emotions? A good theory of animal emotions should also tell us how to identify and distinguish different kinds of emotions in animals, in a way that allows us to ascertain which animals have them.

2(f) Phenomenally conscious mental states

In the fourth chapter, I discuss the issue of animal consciousness. The chapter is a brief summary of my extensive findings (which I intend to publish at greater length in a philosophical journal), and is divided into two parts: one dealing with phenomenal consciousness, the other with so-called "higher-order" mental states.

Phenomenal consciousness (or subjective awareness) remains a subject concerning which philosophers have spectacularly failed to reach agreement - even regarding the most fundamental questions. The source of this disagreement, I suggest, is an over-reliance on philosophical analysis. Analysis alone, I contend, cannot supply us with a scientific definition of an essentially first-person concept such as "phenomenal consciousness", as such a definition would have to be couched in third-person terminology. Any attempted equation of a third-person scientific concept with a first-person psychological concept can only be justified in the light of empirical research; no a priori analysis of the meaning of "consciousness" can bridge the divide between first-person and third-person concepts. At the present time, not only are we still ignorant of what phenomenal consciousness is, we do not even know for sure what it is for.

In the face of this massive ignorance, I suggest that we should put aside all our theoretical assumptions concerning the nature, purpose and/or survival value of consciousness, and focus on how we can reliably identify phenomenal consciousness in others.

Criteria proposed to date for identifying conscious states include linguistic, behavioural, neurophysiological and pharmacological indicators. For instance, animals that are in pain may cry out "That hurts!" (if they are human), try to escape from the painful stimulus, guard the part of their bodies that has been damaged, and reduce feeding, drinking and/or sexual activity. Additionally, they may display specific neurophysiological responses to pain, as well as pharmacological reactions to analgesic substances (opioids such as enkephalins and endorphins) that modify pain responses.

Problems arise when different criteria yield conflicting results. For example, how do we proceed with a creature (such as a fly) which (i) has a brain and central nervous system that is much simpler and more compartmentalised than our own, (ii) withdraws from aversive stimuli such as high temperature, noxious chemicals or electric shock, (iii) modifies its aversive response when administered opioid substances, but (iv) does not protect injured parts, and (v) does not reduce feeding or sexual activity after injury (Smith, 1991)?

It is tempting - but, I would argue, wrong - to conclude, as Leahy suggests (1994, p. 139), that there is no "right answer" to the question of whether flies feel pain, or that linguistic convention should decide the matter. What is needed is a careful assessment of the criteria for identifying conscious states.

Philosophers have traditionally focused on the behaviour of animals when discussing criteria for animal consciousness. Wittgenstein's notion of "the primitive, the natural expressions of the sensation" (PI 244) exemplifies this kind of thinking. While I would agree that behaviour is the ultimate yardstick for the ascription of consciousness to animals, I argue that attempts to infer the occurrence of consciousness from animals' non-verbal behaviour are notoriously fallible. To most laypeople, it seems self-evident that an organism possessing the capacity to learn, or the ability to avoid noxious stimuli, or visceral emotional reactions to stimuli, is aware of something. However, the scientific evidence overwhelmingly suggests otherwise.

The use of pharmacological criteria to identify conscious pain is even more problematic, as the original biological function of the body's pain-killing opiates was not to alleviate pain, but to fight bacteria and signal the body's immune system (Stefano, Salzet and Fricchione, 1998). The presence of these substances in an animal's body is therefore an unreliable indicator of phenomenal consciousness.

For animals lacking language, there remains only one set of criteria that might enable us to identify their conscious states: neurophysiological criteria. Until fairly recently, philosophers have shied away from discussing these criteria; possibly, they were influenced by the argument that "[a]ll the research on 'wiring' and 'switchboards' does not tell us if the animal suffers" (Grandin and Deesing, 2002). I contend on the contrary that animals are not "black boxes", and that neurophysiology can reveal the necessary conditions for consciousness not only in human beings but also in other animals. We can justifiably extrapolate the conditions for consciousness from ourselves to other animals whose neuroanatomy is similar in design to our own. Additionally, any requirements for phenomenal consciousness that arise from the general properties of neurons can be reasonably assumed to apply to other organisms.

Although it receives far less philosophical attention than it merits, the scientific literature relating to what neurologists term "primary consciousness" in humans and other animals is massive, and the behavioural criteria for identifying it have been carefully refined over the last few decades. The basic assumption made is that any individual who is able to report accurately on events going on in her surroundings is conscious. Since this "reporting" does not have to be verbal - one could press a button to report what one sees - consciousness in non-human animals is a legitimate object of scientific study.

I suggest that the neurological literature on primary consciousness offers the best way of making headway in the debate as to which animals are phenomenally conscious. I argue that possession of primary consciousness is a sufficient condition for phenomenal consciousness. Whether it is a necessary condition is another matter: a small but significant minority of neuroscientists suggests that animals have a second, more phylogenetically ancient form of consciousness, which they call "affective consciousness" (Panksepp, 1998, 2003). I evaluate their evidence before drawing my own conclusions on the appropriate neurological and behavioural criteria for identifying phenomenally conscious states in animals.

Finally, I argue that the various philosophical usages of the word "consciousness" do not reflect natural divisions in the animal world, and propose an alternative set of categories, based on animal studies.

2(g) Higher-order mental states

In the second part of chapter four, I discuss "higher-order" mental capacities such as: the capacity for abstract concepts; meta-cognition; a capacity for rational thinking regarding means and ends; an awareness of the beliefs and desires of other individuals; self-awareness; language use; and moral agency.

There are plausible scientific grounds for believing that some non-human animals may possess these capacities (Savage-Rumbaugh et al., 1998, Hart, 1996; Pepperberg, 1999; Budiansky, 1998a, 1998b; Reiss and Marino, 2001; Gallup, 2002; Nissani, 2004; Leahy, 1994; Horowitz, 2002; Griffin, 1994; Whiten and Byrne, 1997; Young and Wassermann, 2001; Huber, 2001; Zhang, Srinivasan and Collett, 1995; Zhang, 2002; Zhang, Bartsch and Srinivasan, 1996; Giurfa et al., 2001; Brannon and Terrace, 2000; Weir, Chappell and Kacelnik, 2002; Bekoff, Allen and Burghardt (eds.), 2002). The ethical implications of a positive finding are obvious: these animals may be entitled to the same basic rights as people are.

I have rejected a "bottom-up" biological approach to the question of whether non-human animals have "higher-order" mental capacities. This approach is well-suited to an enquiry into minimal minds, where we are attempting to formulate a constructive definition of the object of our investigation. Constructive definition can be a useful way of making philosophical progress when the underlying intuitions of different "camps" clash. However, the main philosophical problem associated with "higher-order" capacities is not one of definition - we have a fairly good idea what they are - but one of (mis)identification. Because a "bottom-up" approach proceeds by examining dubious instances of some mental capacity first before looking at more clearcut examples, it runs the methodological risk of blurring philosophical distinctions and thereby overlooking essential properties of the relevant capacity. This is a mistake we cannot afford to make when investigating capacities which many authors consider to be distinctively human traits. The assumption that animals have higher-order states has been a contentious issue since ancient times (Sorabji, 1993). By contrast, it can be safely assumed that at least some non-human animals possess primitive mental states.

Accordingly, when investigating higher-order capacities in other animals, I propose to follow a "top-down" approach, looking at paradigm cases in human beings of acts which manifest these "higher level" mental capacities.

But what are these paradigm cases? For some philosophers (e.g. Chalmers, 1996), the most salient feature of a mental state is its subjectivity. Although it is certainly true that the "inner feel" of a twinge of pain makes its mental status indubitable to its subject, I suggest that its very privacy stymies any effort by an outsider to understand what is happening to the subject. For this reason, the paradigm cases I have selected are voluntary, intentional human acts that take place in the public arena - acts such as exchanging rings in a wedding ceremony, signing a will, casting a vote, or planning a hunting expedition. Such acts are undeniably mental: they can only be understood by attributing beliefs, attitudes, desires and intentions to the parties involved. They also presuppose higher-order mental states, insofar as the participants' intentions refer to other individuals' beliefs or desires. The acts also have an irreducibly subjective aspect, as the participants are motivated by their personal feelings.

It might seem natural for animal researchers to identify higher-order mental acts occurring in the public arena by making inferences from observed behaviour, such as tool use (Griffin, 1994, pp. 101-102). However, as Budiansky points out (1998a, pp. 122 - 128), establishing that an instance of animal behaviour requires a "higher-order" interpretation is extremely difficult.

The methodology I propose to adopt in evaluating alleged instances of behaviour manifesting higher-order capacities in animals is to focus on two questions in particular. First, can the behaviour be explained in terms of what Dennett (1996) refers to as first-order beliefs and desires - which, as I argue in chapter two, can be found even in animals with "minimal minds" - or does it require us to impute second-order beliefs and desires to the animals involved? A second-order intentional system, as defined by Dennett, has beliefs and desires about beliefs and desires - either its own or those of other individuals.

Second, what kind of internal representational capacities does the behaviour in question presuppose? I argue in chapter two that even animals with minimal minds can form representations of events in their surroundings which they can update as new information comes in, so one would expect animals with "higher-order" capacities to be able to do something even more impressive with their representations.

One obvious candidate for that more impressive "something" is a capacity for language. While I address claims of language use in animals, I shall not prejudice my investigation at the outset by specifying it as a requirement for higher-order mental states.

2(h) Which null hypothesis should we use?

Hodos and Campbell (1969) have criticised the use of terms such as "the phylogenetic scale", "the Great Chain of Being" or "Scala Naturae" as grossly misleading, on the grounds that they embody the false assumption that evolution proceeds in a linear fashion, with humans at the top. In reality, modern animals have evolved from less specialised proto-forms, in a way that meets the requirements of their ecological niches. Today's monkeys cannot be taken as representative of the ancestors of humans.

On the other hand, part of the explanatory value of Darwinian theory is the concept of a gradual, step-by-step sequence of design improvements, with superior designs supplanting inferior ones, and the study of anagenesis is a perfectly legitimate area of biological research (Yarczower and Hazlett, 1977, 1979). The intuitive appeal of progress in the mental complexity of different creatures remains powerful, especially when one considers the various classes of vertebrates, although Walker (1983) carefully rebuts the notion that the evolutionary tree of vertebrate ancestry even approximates to a linear scale.

Even Macphail (1982), while stressing the commonality of association formation among vertebrates - which is why he regards them as equally intelligent - has acknowledged that species differ widely in cognitive capacities related to perception, memory, motor skills and motivation. The cognitive differences between different phyla or even kingdoms of organisms are much more profound, given the enormous variation in their neuroanatomy, and the fact that the ecological niches occupied by some organisms are much more mentally challenging than those of other life-forms. However, for the purposes of this thesis, I have to assume at the outset that all species possess the same kinds of mental capacities, until the contrary can be shown. A strong prima facie case can be made that even the most "primitive" cellular life-forms possess significant mental capacities. Di Primio, Muller and Lengeler (2000) argue that bacteria are capable of the following cognitive feats: they have internal and external sensors of different types, respond to stimuli in ways which are subsequently modifiable, identify and compare stimuli at different times using a simple memory, integrate positive and negative stimuli when given simultaneously, pursue goals purposefully, communicate by means of signalling molecules and by exchanging genetic information, and co-operate and compete with bacteria of the same and of other species. Regardless of whether these claims prove to be correct, no unbiased investigator can afford to ignore or belittle them.

2(i) Drawing the Line

While some scientists and philosophers have argued that mental states occupy a continuum from human beings to the smallest cell, others maintain that there is a clear-cut divide between organisms that have minds (or for some writers, organisms that possess consciousness) and those that do not. Is there anything in my proposed methodology that commits me to either view?

I have argued that before we decide to explain a certain kind of behaviour in an organism in terms of its mental states, we should ask: "What is the most appropriate way of describing this behaviour?" In other words, is there some scientific advantage in explaining the behaviour in mentalistic terms - e.g. can it be described more completely or predicted more accurately? Either mental states do or do not further our understanding of the behaviour in question. The decision to impute these states is not one that admits of degree, although the grounds for making the decision might be much stronger for some animals than for others. On methodological grounds, then, I am committed to looking for "on-off" criteria for ascribing these states to organisms. Whether I shall find such criteria is another matter.

2(j) How generous should we be in assessing claims for mental states?

The prevailing tendency in the scientific literature on animal minds is to use mentalistic explanations only as a last resort. Phrases like "genetically programmed", "neural mechanism" and "evolutionary selection" are brandished, to discourage mentalistic terminology.

In fact, there is nothing to rule out mentalistic and physicalistic explanations existing side-by-side, as Griffin contends:

An animal may or may not be conscious, and its behaviour may be influenced to varying degrees by genetic programming. These are actually quite independent considerations, and any combination is possible. Learned behaviour is not always consciously acquired or executed, even in our own species, and it may be even less closely linked to conscious awareness in non-human animals. Likewise, a genetically programmed behaviour pattern may or may not be accompanied or guided by conscious thinking (1994, p. 254).

Nevertheless, simplicity is generally regarded as an explanatory virtue, and many authors on the subject of animal awareness have argued for their position on the basis that it was the simplest reading of the available evidence. Occam's razor, which tells us "never to multiply entities beyond necessity", is frequently cited by minimalists to dispense with mentalistic explanations for animal behaviour as redundant.

Rene Descartes, who referred to animals as "nature's machines", is commonly thought to have been the first philosopher to apply Occam's razor systematically to animals. Discussing the mechanics of animal movement, he argued:

[I]f there were such machines which had the organs and appearance of a monkey or of some other irrational animal, we would have no means of recognising that they were not of exactly the same nature as these animals (1968, p. 73).

(Interestingly, it was Aristotle who first likened animals to automata in chapter 11 of his work, On the Movement of Animals - an analogy he immediately rejected, on the grounds that incoming information could cause quantitative alterations in animals' organs, but had no such effects on the internal components of the automata with which he was familiar. The first philosopher to claim that animals are true machines, lacking sense as well as intelligence, was the Spanish philosopher Gomez Pereira, whose spirited defence of the scientific method, Antoniana Margarita, was published posthumously in 1554, eighty years before Descartes made a similar claim.)

In recent times, the behavioural scientist J. S. Kennedy, who agrees with Descartes' contention that animals are "in principle, machines" (1992, p. 1) has written:

It might seem necessary to suppose that some animals have minds if we had no other explanation for their flexible, adaptive behaviour. But there is of course another explanation, namely the power of natural selection to optimize behaviour ... (1992, p. 121, italics mine).

However, the injunction to avoid positing redundant entities - such as minds - assumes that redundancy is a simple, "on-off" property. For instance, scientists may be able to easily describe an item of animal behaviour using "mentalese", but re-describing the same behaviour in neutral terminology may prove impractically cumbersome. Should they then reject "mind-talk" as redundant?

Other philosophers have used Occam's razor in a contrary sense, arguing that the most parsimonious explanation of the pervasive neurophysiological and behavioural resemblances between human beings (who can certainly feel) and animals is that animals also have feelings. Perhaps the first to argue in this way was Voltaire:

[H]as nature arranged all the springs of feeling in this animal in order that he should not feel? ... Do not assume that nature presents this impertinent contradiction (cited in Leahy, 1994, pp. 91 - 92, italics mine).

Voltaire's argument has been revived by the zoologist Donald Griffin, who has argued in several books that animals possess a sophisticated level of awareness:

[A]s mental experiences are directly linked to neurophysiological processes - or absolutely identical with them, according to the strict behaviourists - our best evidence by which to compare them across species stems from comparative neurology. To the extent that basic properties of neurons, synapses and neuroendocrine mechanisms are similar, we might expect to find comparably similar mental experiences (1976, p. 20).

The problem with this argument is that similarity comes in degrees. How similar does an animal's brain have to be to ours before we can be sure it has mental states?

Alternatively, if having a mind depends on possessing a "critical mass" of neural organisation, even animals with brains like ours may miss out, if they fall below the cut-off point.

The fact is that without extensive research, we simply cannot say what kind of brain an organism needs in order to have a mind.

Morgan's Canon is also used to dispense with mentalistic explanations:

In no case may we interpret an action as the outcome of the exercise of a higher faculty, if it can be interpreted as the outcome of one which stands lower in the psychological scale (cited in Bavidge & Ground, 1994).

Even leaving aside worries about its terminology of "higher" and "lower" psychological faculties, the canon's key insight - that nature must be parsimonious in the way it "designs" (i.e. selects for) organisms that can adapt to their environment (Bavidge and Ground, 1994, p. 26) - contains a hidden assumption: that it is more complicated for nature to generate adaptive behaviour by means of mental states than by other means. Griffin suggests that it may be simpler for nature to "build" an animal with the ability to think in terms of basic concepts when confronted with novel situations than to build an animal with programmed instructions for the most adaptive behaviour in all situations, because "providing for all likely contingencies would require a wasteful volume of specific directions" (1994, p. 115). But at the present time, we simply do not know.

Wolfram's Principle of Computational Equivalence (2002, p. 721) generates parsimony problems of its own. For if, as Wolfram argues, all natural systems can be regarded as computational devices, and the vast majority of computational systems - even those with simple underlying rules - are equally complex, as measured by the range of problems they can be used to solve, then it seems we should either say that they all have minds (panpsychism) or that none of them do (eliminativism).
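Wolfram's claim that even trivially simple rules can generate maximal complexity is easy to illustrate. The sketch below is mine, not Wolfram's code; it implements an elementary one-dimensional cellular automaton under his standard rule-numbering scheme, using Rule 110, which Wolfram (2002) shows to be computationally universal. From a single active cell, the update rule - which does nothing but look up each cell's three-cell neighbourhood in the binary expansion of the rule number - produces an intricate, non-repeating pattern:

```python
def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    Each cell's next state is the bit of `rule` indexed by the 3-bit
    neighbourhood (left neighbour, the cell itself, right neighbour),
    with wrap-around at the edges.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and iterate, printing each generation.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

That so crude a system already attains the computational sophistication Wolfram envisages is precisely what makes his principle corrosive of parsimony arguments: computational complexity alone cannot separate minded from mindless systems.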

The demand for "simplicity" has proven to be a double-edged sword, leaving us unsure how to wield it.

The methodology which I propose here for evaluating a claim that a certain kind of behaviour in an organism is indicative of a mental state is to ask: "What is the most appropriate way of describing this behaviour?", rather than "What is the simplest way of describing it?" We should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict that behaviour more comprehensively than other modes of explanation do, and with an equal or greater degree of empirical accuracy.

According to the criterion I have proposed, there is nothing wrong with using mentalistic explanations of an animal's behaviour to complement genetic, neurophysiological or evolutionary ones, so long as there is some scientific advantage in doing so: a more comprehensive description or better predictions of the behaviour.

2(k) Appropriate Sources of Evidence

(i) Singular versus replicable observations

The use of animal anecdotes has been discredited since the days of Darwin and Romanes, who were prepared to rely on second-hand accounts of observations from naturalists and pet-owners who wrote to them. The following example from Darwin's The Descent of Man (1871) illustrates the flaws in such an approach:

Even insects play together, as has been described by that excellent observer, P. Huber [in 1810], who saw ants chasing and pretending to bite each other, like so many puppies (cited in Leahy, 1994, p. 101).

What is wrong here, I suggest, is not the singularity of the observation, but the observer's lack of "self-control", in letting his imagination run freely when interpreting what he saw. The observer uncritically imputes a high level of awareness to ants ("pretending"), and draws an unhelpful analogy with an unrelated species ("like so many puppies"). Compare this with an observation of a similar phenomenon, recently discovered by Christian Peeters and Bert Holldobler in the Indian ponerine ant, Harpegnathos saltator:

[T]he membership of a large Harpegnathos colony is organized into three social classes... Status in this complex class system is settled by a ritualized form of dueling, in which the workers wield their antennae like whips... After this odd pas de deux is repeated up to 24 times, the two combatants simply walk away from each other. There is no obvious winner, and the whole performance appears to have been no more than a reaffirmation of social equality (Holldobler and Wilson, 1994, pp. 91, 93).

While the observer still uses metaphorical language to describe the activity ("ritualized", "dueling", "wield", "pas de deux", "winner", "performance", "reaffirmation"), the language is carefully controlled. Cognitively neutral verbs are employed; the reader is not left with an impression that ants are mentally sophisticated. Speculation is kept to a minimum, and analogies with unrelated species are avoided. The explanation given for the behaviour ("a reaffirmation of social equality") is modest and plausible, given that individuals in this species of ant do occasionally change their social status, as a result of their dueling.

Thus the insistence by Thorndike and Morgan on controlled, replicable laboratory experiments, while commendable for its scientific rigour, misses the point. From a scientific perspective, the key question to be asked when assessing an observation is not: is it replicable? but: is it reliable? Laboratory experiments which have been replicated will score highly on an index of reliability, as the risk of error is low. But the risk of error is also low when a singular observation is made by an acknowledged expert in the field. I conclude that there is no good scientific reason for excluding such a singular observation. What scientists should then proceed to do is further investigate this observation and endeavour to explain it within some general framework.

As regards controlled experiments, I have decided to err on the side of caution and not give credence to experimental observations that other researchers have tried but failed to replicate. Recent research, which has not yet been replicated, will be admitted, if published in a reputable scientific journal, but any new claims made will be treated with caution.

I also reject studies whose follow-up has produced conflicting results. Where there is scientific controversy over the possession of a mental capacity by a group of organisms, a conservative interpretation of the results will be favoured.

As far as possible, laboratory studies that use a single individual will be avoided. Studies of language use in animals that focus on one individual will be admitted only if there is follow-up research relating to other animals trained according to the same methods. Because such research exists in the case of Kanzi the bonobo and Alex the African grey parrot, evidence relating to these animals will be allowed in this thesis. However, recent claims (Sheldrake and Morgana, 2002) that the talking parrot N'kisi, trained by Aimee Morgana, can converse as well as a three-year-old child, have (to the best of my knowledge) been neither replicated nor refereed in a scientific journal, so they will not be discussed here.

(ii) Laboratory versus natural observations

There is something to be said for observing animals in their natural state, as cognitive ethologists do, simply because such observations maintain the network of relationships between an organism and its environment. An organism in a laboratory is an organism uprooted: the nexus of connections is severed, leaving us with less information about the interactions which characterise its lifestyle. Rigour is secured, but at a high price.

On the other hand, if the research is designed to measure the relation between a small number of variables, laboratory controls eliminate contamination by external factors.

In other words, the methodologies of behavioural science and ethology should be seen as complementary, rather than contradictory. Observations of animals in the wild will therefore be admitted if they are reliably attested by an acknowledged expert in the field.

3. Interests of Animals and Other Organisms

3(a) Is my methodology ethically biased?

Although two chapters of this thesis are devoted to ethical argumentation, I wish to declare at the outset that I have no intention of deciding in favour of any particular theory of ethics. The task of evaluating the relative merits of rival ethical theories would be far too large a topic for this thesis. There are, however, certain theories of ethics that are at odds with the approach I will be putting forward, so I shall briefly state here why I reject them.

Broadly, ethical theories can be classified into six categories. Deontological theories identify certain actions or entities as objectively good and deserving of respect by moral agents. Virtue ethics, by contrast, puts forward an agent-centred rather than act-centred account of morality, in which good acts are regarded as those which tend to improve one's moral character. Utilitarian theories define goodness as that which tends to promote a state (happiness or pleasure) in sentient individuals. Contractualist theories view goodness as the outcome of a hypothetical agreement by rational agents who have a vested interest in getting along with one another. Prescriptivist theories define goodness in terms of obedience to the arbitrary decrees of some higher authority (e.g. God or the state). (The term "arbitrary" is crucial here: if the decrees require justification on rational grounds, then that justification, and not the decree, becomes the basis for defining goodness.) Finally, subjectivist and relativist theories define goodness in terms of either individual or social preferences.

Of the theories listed above, those which I wish to exclude on purely methodological grounds are prescriptivism, subjectivism and relativism. None of these theories even attempts to derive, on rational grounds, moral norms that can be used to handle conflicts of interests. One may choose to play a "language game" in which the word "goodness" is defined as what a whimsical deity allegedly likes, or what I happen to like, or what the society I live in currently decrees that it likes, but if one elects to play any of these games, then any rational argumentation about whether something is good or not becomes impossible. There can be no argument about matters of taste.

There is a deeper problem with playing this game: why should what I like, or what society likes, be called "good"? Why not call it "bad" instead? What if I prefer to define myself as evil, and "good" as what I don't like? This language game could be played - but there seem to be very few people who wish to play it. Why? I suggest that the simple identification of "what I like" with the term "good" (rather than "bad") is intuitively appealing, precisely because most of the things I happen to like are in fact good for me, in some way that is evident to everyone, and hence objectively grounded. (Likewise, the identification of "good" with what society wants yields correct answers in the majority of cases, because for any viable society, most of the things it wants - such as public sanitation or defence against enemies - will be good of necessity.) In other words, the superficial attraction of playing a "language game" whereby good is identified with the likes and dislikes of some entity (such as myself) is in fact a borrowed one: it presupposes a certain ontological background, in which interested parties generally pursue what is objectively good for them.

Another way of diagnosing the failure of the three theories I criticise is that none of them provides a systematic account of goodness. Their moral "norms" (if one can call them that) are ultimately based on someone's whims - be they one's own, society's, or those of a capricious deity who supposedly makes rules for no particular reason. (I am of course aware that there are other, more rational varieties of theistic belief. However, these forms of theism typically envisage moral norms not as prescriptions, but as somehow grounded in the nature of agents and of things, and hence amenable to rational enquiry.)

What of the remaining accounts of ethics? One of the points I argue for in this thesis is that one and the same moral norm can often be justified within the framework of different ethical theories. Rather than attempting to decide which of these theories is "right", I shall limit myself to discussing to what degree each of the major theories can accommodate the conclusions I reach regarding our duties and entitlements vis-a-vis other organisms. Obviously some theories, such as contractualism, have in-built limitations of scope as regards the focus of their moral concern, but I shall argue that even these theories can generate more powerful ethical conclusions than is commonly believed.

3(b) What constraints should we impose on an ethical theory based on interests?

Although it could be argued that any approach to ethics should accord with our common sense intuitions, specifying these intuitions in advance is easier said than done, and I have chosen to avoid this path. Instead, I propose a universality requirement: any interest-based theory should be broad enough to cover the gamut of interests, no matter whose they may be. A theory that failed to meet this requirement would be an unreliable one, as its conclusions could in principle be overturned by the invocation of interests lying outside its scope.

3(c) How do we define and identify interests?

In chapter one, I argue that a sentient individual's interests cannot be plausibly identified with the totality of either its actual desires or its counterfactual desires (i.e. what an individual would want if he/she were fully informed), and I discuss Gary Varner's (1998) proposal that something may also be in an individual's interests if it serves some biologically based need that the individual has. Once we grant that the satisfaction of a sentient being's biological needs is in its interests, it is much easier to argue that the satisfaction of a non-sentient organism's biological needs is also in its interests.

On the other hand, the ascription of interests to non-living artifacts such as cars is generally considered to be a reductio ad absurdum for any theory of ethics. Any theory of ethics which posits that organisms not only have psychological interests (desires) but biological ones as well must therefore explain how these differ from the needs of non-living artifacts.

In chapter one, I consider two possible ways of formulating this distinction: one may argue, as Varner (1998) does, that the organs found within living things can be said to have functions (and hence a "good of their own") insofar as these functions conferred an evolutionary advantage on the ancestors of today's organisms (unlike artifacts, which lack ancestors); or one may argue that there is a valid teleological distinction to be made between biological functions and non-biological needs. These two approaches can be labelled as Darwinian and Aristotelian, respectively. I evaluate the merits and demerits of each approach in the first chapter of this thesis. In particular, I discuss whether these approaches can explain why the satisfaction of a biological interest can be said to be morally significant.

3(d) The "is-ought" gap

How can we derive "ought" statements from factual "is"-statements regarding organisms' biological interests? The derivation I put forward in this thesis is not an analytic one. Instead, I simply assume that moral oughts are a fact of life, and that they have some sort of rational basis. (The alternative, which I considered and rejected above, is to accept some version of prescriptivism, subjectivism or relativism.)

I then proceed by asking what kind of facts about the world could plausibly be said to ground moral oughts. I argue in chapters one and five that if anything, biological facts supply a firmer basis for these "oughts" than actual or even hypothetical (fully informed) desires could.

This line of argumentation is at odds with theories of ethics that define "goodness" in terms which cannot apply to non-sentient beings per se, and regard these beings as good only insofar as they give satisfaction to sentient beings. For instance, "goodness" may be regarded as that which tends to promote happiness or pleasure in sentient beings (utilitarianism), or as the outcome of a hypothetical agreement by rational agents who have a vested interest in getting along with one another (contractualism).

3(e) How do we resolve conflicts of interests between organisms?

Our next task is to find some way of deciding what to do when the interests of different organisms are in conflict. Accordingly, the ethical focus of this thesis will address two major questions. First, what duties are imposed on us by the recognition of these interests? (Putting it another way, what are living things entitled to from us?) Second, what are we, as human organisms, morally entitled to do to other organisms, in order to promote our own interests?

In this thesis, I put forward a teleological account in which I defend the notion that each species possesses a nature of its own, which grounds our duties towards individual members of that species. The chief criticism that has been directed at the concept of "nature" is that it is a static, essentialist notion which is utterly unable to account for the ability of species to evolve over time. In this thesis, I defend a neo-Aristotelian account of nature which, I argue, is completely compatible with Darwinian evolution.

3(f) A general account of goodness

The general definition of goodness which I use in this thesis - and which, I suggest, is common to competing theories of ethics - is as follows: goodness is that which is in some party's interests. Where ethical theories differ is in their answers to the questions of what kinds of interests are paradigm cases and/or of paramount importance.

There is nothing in this general definition that tells us what sorts of entities have interests, and in particular, whether non-sentient organisms have them. To answer these questions, we shall first have to investigate what an organism is.