
*** SUMMARY (Conclusions reached)

Conclusions from chapter 1
Conclusions from chapter 2
Conclusions from chapter 3
Conclusions from chapter 4
Conclusions from chapter 5
Conclusions from chapter 6

CONCLUSIONS FROM CHAPTER ONE

Definitions of life

Teleological definition of life:

to be alive is to possess intrinsic ends (Cameron, 2000, p. 333).

Intrinsic finality goes hand-in-hand with unique formal features which characterise living things: i.e. a material object is alive and has intrinsic ends if, and only if, it possesses these formal features.

Formal definition of life:

The presence of both an Aristotelian formal cause and a final cause in a living body can be ascertained empirically by its possession of the following three biological properties:

Necessary conditions for life:

Aside from the final and formal features stipulated above, all other necessary formal, material, efficient causal and temporal conditions for something's being alive are simply requirements that any material entity must satisfy in order to be able to realise intrinsic ends (of whatever sort), given the laws of nature in our universe.

Teleological definition of "nature" and "species"

Membership of a species cannot be simply defined in terms of shared intrinsic ends. Individual members of the same species do not share identical ends: the two sexes, for instance, possess different internal organs, so their individual flourishing is realised differently. Different morphological types within the same species also have different ends.

However, for any individual organism X we select from any biological population, there are individuals (of the same sex and morphological type) among its ancestors thousands of years ago, which possessed the same intrinsic ends. This suggests a way in which we can re-define the Aristotelian concept of nature.

If for any individual X selected from a population at time t2, some ancestral individuals can be found in the same population at an earlier time t1, such that:

(i) the part-whole functionality of their organs (and hence their bodily morphology) matches that of their descendant X;

(ii) the end points of their biological body clocks are the same as those of their descendant X; and

(iii) there are no reproductive barriers such as would affect viability of offspring if - counterfactually - these ancestors were to mate with their descendant X,

then we can say that:

(a) these ancestral individuals have the same holistic ends as their descendant X;

(b) the nature of the population has remained the same over the time interval from t1 to t2; and therefore

(c) the population at t1 belongs to the same chronospecies as the population at t2.

The notion of "sameness" employed here need not satisfy the requirements of transitivity of identity.

The promotion of biological function(s) in an organism, which its body parts are dedicated to supporting, can be said to be in its interests, and hence objectively good for it.


CONCLUSIONS FROM CHAPTER TWO

In the second chapter, entitled "What does it take to possess a Minimal Mind?", I address the issue of which organisms can be said to possess mental states, and what kind of features the most basic kind of mind would have to possess.

The "minimal mind" which I describe in this chapter is not a phenomenally conscious mind - it does not experience qualia ("raw feels", such as the quality of redness that one experiences when one looks at ripe tomatoes). Nevertheless, I argue that it still deserves to be called a mind, because an animal with a minimal mind can (i) sense objects in its environment, (ii) remember new skills it acquires, (iii) flexibly update its own internal programs, which regulate its behaviour, (iv) learn to associate actions with their consequences, (v) control its own bodily movements by fine-tuning them, (vi) internally represent its current state, its goal and the pathway it needs to follow to get to its goal, and (vii) correct any mistakes it makes in its movements towards its goal, as well as any factual mis-representations of its environment, in the light of new information. For these reasons, an animal with a minimal mind deserves to be called a bona fide agent.

One of the surprises of this chapter is that there are in fact four kinds of minimal minds. I attempt to define the necessary and sufficient conditions for what I call operant agency, navigational agency, tool agency and social agency in such a way that they can only be explained by adopting what I call an agent-centred intentional stance (see definition below). Any organism that is capable of learning to acquire one of these kinds of agency thus qualifies as having beliefs, desires and intentions.

Conclusions relating to Stephen Wolfram's computational description of cognitive mental states
Conclusions relating to Daniel Dennett's intentional stance and its implications for cognitive mental states
Conclusions relating to biological criteria for the possession of cognitive mental states
Conclusions relating to sensory capacities required for the possession of cognitive mental states
Conclusions relating to the relevance of memory to cognitive mental states
Conclusions relating to the relevance of flexible behaviour to cognitive mental states
Conclusions relating to the relevance of learning to cognitive mental states
Conclusions relating to what kinds of actions enable us to identify cognitive mental states
Conclusions relating to the relevance of representations to the possession of cognitive mental states
Conclusions relating to the relevance of normativity criteria to the possession of cognitive mental states
Sufficient conditions for the exercise of the four different kinds of intentional agency


Computational criteria for the identification of mental states

C.1 Our identification of computations in an entity, or rule-governed transformations that take it from one state to another, is a necessary condition for our being able to ascribe cognitive mental states to it. Justification.

C.2 All natural entities and natural processes can be described according to Wolfram's computational stance: that is, the set of natural entities which perform computations is universal. Justification.

C.3 Our identification of rules that describe an entity's behaviour (as per Wolfram's computational stance) is not a sufficient condition for our being able to ascribe cognitive mental states to that entity. Justification.
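
As a purely illustrative aside, the kind of rule-governed transformation that conclusions C.1 to C.3 have in mind can be sketched in a few lines of Python using an elementary cellular automaton, Wolfram's stock example of a computational system. The rule number and grid size below are arbitrary choices of mine, not anything stipulated in the thesis:

```python
# A minimal sketch of Wolfram's computational stance: an entity's behaviour is
# described as a rule-governed transformation from one state to the next.
# Rule 110 and the 31-cell grid are arbitrary illustrative choices.

RULE = 110  # any 8-bit elementary rule number would serve the same purpose

def step(cells, rule=RULE):
    """Apply the rule once: each cell's next state is fixed by its neighbourhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and watch the rule-governed behaviour unfold.
state = [0] * 31
state[15] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in state))
    state = step(state)
```

The point of C.3 is precisely that being describable in this way is cheap: it tells us nothing, by itself, about cognitive mental states.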


Intentional systems criteria for the identification of mental states

Intentional systems are here defined as those whose attributes possess the philosophical property of "aboutness", or being about something. Dennett contends that the behaviour of these systems can be predicted from an intentional stance, by treating the entities involved as if they were agents with information about how to attain their goals (or alternatively, agents with beliefs and desires). Dennett (1997, pp. 34 - 49) argues that we can regard all organisms - and, for that matter, many human artifacts - as intentional systems, when predicting their behaviour.

I.1 Our ability to describe an entity's behaviour according to Dennett's intentional stance is a necessary condition for our being able to ascribe cognitive mental states to it. Justification.

I.2 The set of entities which can be described by Dennett's intentional stance is not universal in scope, but includes all organisms (and their parts). Justification.

I.3 Our ability to describe an entity in terms of Dennett's intentional stance is not a sufficient condition for our being able to ascribe cognitive mental states to that entity. Justification.

Definition

A goal-centred intentional stance is an intentional stance which explains an entity's behaviour in terms of its goals and the information it has about them.
An agent-centred intentional stance is one which regards the entity as an agent who decides what it will do, on the basis of its beliefs and desires.

I.4 Before we can attribute beliefs and desires to an organism, it must be capable of exhibiting behaviour which manifests its desires for its own built-in biological ends, as well as its beliefs about those ends - i.e. behaviour best explained by adopting an agent-centred intentional stance.

I.5 Our ability to identify behaviour in an organism that can be described using the intentional stance is not a sufficient warrant for ascribing mental states to it. Justification.


Necessary conditions for the identification of mental states in organisms

Biological criteria for the identification of mental states

B.1 An entity must be alive in order to qualify as having cognitive mental states. Justification.

B.2 A necessary condition for our being able to ascribe cognitive mental states to an entity is that we can identify the following features:

(a) built-in biological needs, essential to its flourishing;

(b) a master program that regulates the internal structure of an organism and the internal interactions between its components;

(c) internal relations between the parts (i.e. new physical properties which appear when they are assembled together);

(d) a nested hierarchy of organisation of the parts;

(e) dedicated functionality, where the parts' repertoire of functionality is dedicated to supporting that of the unit they comprise;

(f) stability - the parts are able to work together for some time to maintain the entity in existence as a whole.

These conditions enable us to impute a final cause or telos to the entity, and identify its various "selfish" or intrinsic ends. Justification.

B.3 The presence in an individual of biologically "selfish" behaviour, which is directed at satisfying its own built-in biological needs, is an essential condition for the meaningful ascription of mental states to it.

B.4 An entity must be an individual biological organism in order to qualify as having cognitive mental states. An evolutionary lineage of organisms cannot be meaningfully described as having cognitive mental states. Justification.

B.5 Being an organism is not a sufficient condition for having mental states. Justification.

B.6 An organism must have a central nervous system in order to qualify as having cognitive mental states. Justification.


Sensory criteria for the identification of mental states

Definition - "sensor", "sensitive"
A sensor is any device that responds in a specific way to a physical stimulus (e.g. chemicals, heat, light, sound, pressure, motion, flow). A sensor (and by extension, any entity possessing sensors) can be described as sensitive to the stimulus to which it specifically responds.

S.2 An organism must be capable of encoding and storing information about its environment before it can be said to possess mental states (in particular, beliefs and desires). (Corollary of I.1 and S.1.)

S.3 All cellular organisms (including bacteria) possess sensors that can encode various states of information about their surroundings. Such organisms can therefore be described as sensitive to their surroundings.

Definition - "sense"
On a broad definition of "sense", any organism possessing sensors that can encode and store information relating to a stimulus, which corresponds to different states of the stimulus, can be said to sense the stimulus. On a narrower definition, the verb "sense" can be restricted to organisms possessing sensors with the ability to:

S.4 On the broad definition used above, all cellular organisms (including bacteria) can be said to possess senses. In eukaryotes (but not prokaryotes), sensors initiate movement towards goals; and reflexes appear to exist in two species of coelenterates, as well as all "higher" phyla of animals.

S.5 The possession by an organism of sensors which encode information about its environment is an inadequate warrant for saying that it is capable of cognitive mental states.

S.6 The fact that an organism can sense objects in its environment is an inadequate warrant for saying that it is capable of cognitive mental states, even if the organism's senses are of the sophisticated kind found only in "higher" animals.


Memory-related criteria for the identification of mental states

Definition - "memory"
The term memory refers to any capacity for storing information.

M.1 All cellular organisms possess some kind of memory capacity, which enables them to detect changes in their environment.

M.2 The existence of memory in an organism is not a sufficient ground for ascribing cognitive mental states to it.

M.3 The chemical memory of bacteria can be adequately described using a goal-centred intentional stance.

M.4 The distinction between procedural (non-declarative) and declarative memory - "knowing how" versus "knowing that" - appears to be a fairly robust one.

M.5 Procedural memory remains poorly defined in the scientific literature.

M.6 Procedural memory appears to be common to all animals and possibly some other eukaryotes, but does not occur in prokaryotes. Declarative semantic memory is found in mammals, birds and some insects. The existence of episodic memory in non-human animals remains unproved.

M.7 There can be no scientific or philosophical justification for attributing beliefs and desires to an organism lacking memory. The existence of memory capacity in an organism is a necessary condition for ascribing cognitive mental states to it.

M.8 There can be no scientific or philosophical justification for attributing beliefs and desires to an organism lacking procedural memory. In other words, procedural memory is a necessary condition for the attribution of mental states.

M.9 As there have been no credible claims that prokaryotes (bacteria and archaea) possess any kind of procedural memory, we can assume that they do not have beliefs or desires.

M.10 Procedural memory is not a sufficient condition for the attribution of mental states.


Flexible behaviour and the identification of mental states

F.1 Modifiable behaviour occurs among all cellular organisms. Specifically, in any cellular organism, the reaction to a stimulus is always indirect and modifiable (through the addition or removal of other stimuli). Justification.

Definition - "fixed pattern"
We can mathematically represent a pattern of behaviour in an organism by an output variable (say, z). A fixed pattern can be defined as a pattern where the value of the output variable z remains the same over time, given the same values of the input variables.

F.2 Behaviour by an organism which conforms to a fixed pattern or rule is not a sufficient warrant for ascribing cognitive mental states to that organism, even if stimulus-response coupling is indirect and modifiable (by the addition or removal of other stimuli). Justification.

Definition - "flexible behaviour"
If a program governing some aspect of an organism's behaviour changes over time, such that the value of an output variable z is no longer the same for the same inputs, whether because of a change in the function(s) which define the value of z, or the parameters of the function(s), or the conditions in the program under which the function(s) are invoked, then the behaviour described by z is flexible.
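
As a purely illustrative aside, the contrast between a fixed pattern (defined above) and flexible behaviour can be sketched in a few lines of Python. The names and the particular update rule are my own illustrative assumptions, not part of the thesis:

```python
# A fixed pattern: the output z is always the same for the same input.
def fixed_response(stimulus):
    return 2.0 * stimulus

class FlexibleResponder:
    """Flexible behaviour: the program governing z changes over time,
    so the same input need not yield the same output on later occasions."""
    def __init__(self, gain=2.0):
        self.gain = gain  # a parameter of the output function

    def respond(self, stimulus, feedback=0.0):
        z = self.gain * stimulus
        # The parameter of the function is itself updated by experience,
        # so later calls with the same stimulus can produce a different z.
        self.gain += 0.1 * feedback
        return z

r = FlexibleResponder()
print(fixed_response(1.0), fixed_response(1.0))       # identical outputs
print(r.respond(1.0, feedback=1.0), r.respond(1.0))   # output drifts with experience
```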

F.3 The occurrence of flexible behaviour in an organism is a necessary condition for the warranted ascription of cognitive mental states to it. (Corollary of F.2.) Justification.

F.4 All organisms exhibit flexible behaviour, to some degree. Justification.

F.5 The occurrence in an organism of flexible behaviour does not provide a sufficient warrant for the ascription of mental states to it. Justification.

F.6 Internally generated flexibility of behaviour (i.e. the ability to modify patterns of information transfer by means of an inbuilt mechanism) is a necessary condition for the existence of cognitive mental states in an organism. In other words, flexible behaviour by an organism must be internally generated before it can be regarded as a manifestation of a cognitive mental state. Justification.

F.7 Internally generated flexible behaviour appears to be confined to organisms with central nervous systems. It is found in most but possibly not all phyla of animals with central nervous systems. (Flatworms may not be capable of it, but many other phyla of worms are.)

F.8 The presence in an organism of flexible behaviour patterns that are acquired through an internal mechanism does not provide a sufficient warrant for our being able to ascribe cognitive mental states to it. Justification.


Learning criteria and the identification of mental states

Some preliminary definitions relating to learning
Learning (as defined by behavioural psychologists): "[a] relatively permanent change in behaviour potential as a result of experience" (Abramson, 1994, p. 2).
Learn (as defined by the Merriam-Webster Online dictionary, 2004): to gain knowledge or understanding of or skill in by study, instruction, or experience.

1. Non-associative learning: "those instances where an animal's behaviour toward a stimulus changes in the absence of any apparent associated stimulus or event (such as a reward or punishment)" (Encyclopedia Britannica, 1989). Only one kind of event (the stimulus) is involved in this kind of learning.

1(a) Habituation: the decline of a response "as a result of repeated stimulation" (Abramson, 1994, p. 106).

1(b) Sensitization: "the opposite of habituation and refers to an increase in frequency or probability of a response" to a stimulus (Abramson, 1994, p. 105).

1(c) Dishabituation: a "facilitation of a decremented or habituated response" (Rose and Rankin, 2001, p. 63).

2. Associative learning: A form of behaviour modification involving the association of two or more events, such as between two stimuli, or between a stimulus and a response. In associative learning, an animal does learn to do something new or better (Abramson 1994, p. 38, italics mine).

2(a) Classical conditioning: refers to the modification of behavior in which an originally neutral stimulus - known as a conditioned stimulus (CS) - is paired with a second stimulus that elicits a particular response - known as the unconditioned stimulus (US). The response which the US elicits is known as the unconditioned response (UR). An organism exposed to repeated pairings of the CS and the US will often respond to the originally neutral stimulus as it did to the US (Abramson, 1994, p. 39).

2(b) Instrumental conditioning and operant conditioning are "examples of associative learning in which the behavior of the animal is controlled by the consequences of its actions... [Whereas] classical conditioning describes how animals make associations between stimuli, ... instrumental and operant conditioning describe how animals associate stimuli with their own motor actions ... Animals learn new behaviours in order to obtain or avoid some stimulus (reinforcement)" (Abramson, 1994, p. 151).
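
As a purely illustrative aside, the Rescorla-Wagner model - a standard formal account of classical conditioning, not one drawn from Abramson or from the thesis itself - shows how the association of stimuli with outcomes can be captured in a few lines of Python. The learning-rate values are arbitrary illustrative choices:

```python
# Rescorla-Wagner update for one conditioning trial. The prediction error is
# shared by all stimuli present, which is what produces effects such as blocking.
def rescorla_wagner_trial(V, present, reward, alpha=0.3):
    total_prediction = sum(V[s] for s in present)
    error = reward - total_prediction
    for s in present:
        V[s] += alpha * error
    return V

V = {"light": 0.0, "tone": 0.0}

# Phase 1: pair a light (CS) with food (US); its associative strength grows.
for _ in range(20):
    rescorla_wagner_trial(V, present=["light"], reward=1.0)

# Phase 2: add a tone. The light already predicts the food, so the tone gains
# almost no strength -- the "blocking" effect referred to in L.16 below.
for _ in range(20):
    rescorla_wagner_trial(V, present=["light", "tone"], reward=1.0)

print(V)  # light close to 1.0, tone close to 0.0
```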

Conclusions
L.1 Habituation and sensitization appear to be confined to eukaryotes, or organisms with a nucleus in their cells. Justification.

L.2 The existence of memory in an organism is a necessary but not a sufficient condition for learning. Justification.

L.3 Learning should not be attributed to an organism unless it displays a change in its pattern of behaviour which it is able to reproduce on a subsequent occasion. Justification.

L.4 The ability of an organism to display flexible behaviour is a necessary condition for learning.

L.5 The ability of an organism to undergo habituation and sensitization is not a sufficient condition for learning. Justification.

L.6 The occurrence of non-associative habituation and sensitization in an organism does not provide a sufficient warrant for the ascription of mental states to it. (Corollary of Conclusion F.2.) Justification.

L.7 The occurrence in an organism of flexible behaviour is not a sufficient condition for learning. Justification.

L.8 The capacity for associative learning in an organism is a sufficient condition for its being able to engage in internally generated flexible behaviour. Justification.

L.9 Associative learning appears to be confined to organisms with central nervous systems. It is found in most but possibly not all phyla of animals with central nervous systems. (Flatworms may not be capable of associative learning, but many other phyla of worms are.) Justification.

L.10 The ability of an organism to undergo associative learning (classical and/or instrumental conditioning) is a sufficient condition for its being able to learn, in the proper sense of the word. Justification.

L.11 An organism must be capable of learning before it can be said to have cognitive mental states. Justification.

L.12 An organism must be capable of associative learning before it can be said to have cognitive mental states. Justification.

L.13 A capacity for learning in an organism does not provide a sufficient warrant for our being able to ascribe cognitive mental states to it. Justification.

L.14 A capacity for associative learning in an organism does not provide a sufficient warrant for our being able to ascribe cognitive mental states to it. Justification.

L.15 Neither an animal's capacity to undergo classical conditioning nor its ability to learn from instrumental conditioning, per se, warrants the ascription of cognitive mental states to it.

L.16 The occurrence of blocking in an organism does not provide a sufficient warrant for our ascription of cognitive mental states to it. Justification.

L.17 The occurrence of so-called higher-order forms of associative learning in an organism does not, taken by itself, warrant the conclusion that it has cognitive mental states. Justification.

L.18 The capacity for rapid reversal learning in an animal does not, by itself, warrant the ascription of mental states to it. Justification.

L.19 Progressive adjustments in serial reversal tests constitute good prima facie evidence that an animal is trying to adjust to sudden changes in its environment, by rapidly revising its expectations. Justification.

L.20 An animal's ability to form categorical concepts and apply them to novel stimuli indicates the presence of mental processes - in particular, meta-learning. Justification.

L.21 An animal's ability to identify non-empirical properties is a sufficient condition for its having mental states (intentional acts). Such an animal can apply non-empirical concepts, by following a rule. Justification.


Criteria relating to action and the identification of mental states

A.1 Behaviour by an organism must vary in response to non-random internal states before it can be regarded as a manifestation of a mental state. Justification.

A.2 Behaviour by an organism must vary in response to its internal states, as well as external conditions, before it can be regarded as a manifestation of a cognitive mental state. Justification.

A.3 An organism must be capable of directed bodily movements before these movements can be regarded as a manifestation of a cognitive mental state. Justification.

A.4 All cellular organisms are capable of directed movement. Justification.

A.5 The occurrence of directed bodily movement in an organism does not provide a sufficient warrant by itself for the ascription of mental states to it. Justification.

A.6 A capacity for local movement (locomotion) in an organism is not a requirement for its possession of mental states. Justification.

Broad definition - "navigation"
Any organism that can use its senses to steer itself or a part of its body around its environment is capable of navigation.

A.7 An organism must be capable of navigation before its movements can be regarded as a manifestation of a cognitive mental state. Justification.

A.8 The occurrence of navigation and guiding sensors in an organism does not provide a sufficient warrant for the ascription of mental states to it. Justification.

Definition - "action selection mechanism"
An action selection mechanism in an organism may be defined as a repertoire of actions, combined with the ability to select the most appropriate one for the present circumstances.
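
As a purely illustrative aside, the definition above can be sketched in a few lines of Python. The repertoire entries and the scoring scheme are my own illustrative assumptions, not the author's specification:

```python
# An "action selection mechanism": a repertoire of actions plus a way of
# picking the one most appropriate to the current circumstances.
def select_action(repertoire, circumstances):
    """Return the action whose preconditions best match current circumstances."""
    def suitability(action):
        return sum(1 for feature in action["suited_to"] if feature in circumstances)
    return max(repertoire, key=suitability)

repertoire = [
    {"name": "approach", "suited_to": {"food_detected"}},
    {"name": "withdraw", "suited_to": {"noxious_stimulus"}},
    {"name": "explore",  "suited_to": set()},   # default when nothing else fits
]

print(select_action(repertoire, circumstances={"food_detected"})["name"])  # approach
```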

A.9 An organism must have an action selection mechanism before it can be said to have cognitive mental states. Justification.

A.10 All cellular organisms possess an action selection mechanism of some sort. Justification.

A.11 The fact that an organism has an action selection mechanism does not provide a sufficient warrant for the ascription of mental states to it.

A.12 The fact that an organism has an action selection mechanism, sensors to guide navigation, and a nervous system with reflexes, does not provide a sufficient warrant for the ascription of mental states to it. Justification.

A.13 The presence of centralised action selection, sensors to guide navigation, and a central nervous system in an organism does not provide a sufficient warrant for the ascription of mental states to it. Justification.

A.14 An organism must be capable of fine-tuning its bodily movements before it can be identified as having cognitive mental states. Justification.

A.15 Only organisms with central nervous systems are capable of fine-tuning their bodily movements. Justification.


Representational criteria for the identification of mental states

R.1 A necessary condition for the ascription of beliefs to an organism is that it be capable of mis-representing events occurring in its surroundings. Justification.

R.2 The presence in an organism of Dretskean representations, defined as indicators acquired through learning which serve a biological function, does not provide a sufficient warrant for our being able to ascribe cognitive mental states to it. Justification.


Normativity criteria for the identification of mental states

N.1 An organism must be capable of self-correcting behaviour before it can be said to have cognitive mental states. Justification.


Sufficiency conditions for the identification of mental states

Definition - "minimal map"
A minimal map is a representation which is capable of showing: (i) the animal's current state; (ii) its current goal; and (iii) the pathway it needs to follow to reach that goal.

A minimal map need not be spatial, but it must represent specific states.

Definition - Operant conditioning
DF.1 An animal can be described as undergoing operant conditioning if the following features can be identified:

(i) innate preferences or drives;

(ii) innate motor programs, which are stored in the brain, and generate the suite of the animal's motor output;

(iii) a tendency on the animal's part to engage in exploratory behaviour;

(iv) an action selection mechanism, which allows the animal to make a selection from its suite of possible motor response patterns and pick the one that is the most appropriate to its current circumstances;

(v) fine-tuning behaviour: efferent motor commands which are capable of stabilising a motor pattern at a particular value or within a narrow range of values, in order to achieve a goal;

(vi) a current goal: the attainment of a "reward" or the avoidance of a "punishment";

(vii) sensory inputs that inform the animal whether it has attained its goal, and if not, whether it is getting closer to achieving it;

(viii) direct or indirect associations between (a) different motor commands; (b) sensory inputs (if applicable); and (c) consequences of motor commands, which are stored in the animal's memory and updated when circumstances change;

(ix) an internal representation (minimal map) which includes the following features:

(a) the animal's current motor output (represented as its efference copy);

(b) the animal's current goal (represented as a stored memory of the motor pattern or sensory stimulus that the animal associates with the attainment of the goal); and

(c) the animal's pathway to its current goal (represented as a stored memory of the sequence of motor movements or sensory stimuli which enable the animal to steer itself towards its goal);

(x) the ability to store and compare internal representations of its current motor output (i.e. its efference copy, which represents its current "position" on its internal map) and its afferent sensory inputs;

(xi) a correlation mechanism, allowing it to find a temporal coincidence between its motor behaviour and the attainment of its goal;

(xii) self-correction, that is:

(a) an ability to rectify any deviations in motor output from the range which is appropriate for attaining the goal;

(b) abandonment of behaviour that increases, and continuation of behaviour that reduces, the animal's "distance" (or deviation) from its current goal; and

(c) an ability to form new associations and alter its internal representations (i.e. update its minimal map) in line with variations in surrounding circumstances that are relevant to the animal's attainment of its goal.

If the above conditions are all met, then we can legitimately speak of the animal as an intentional agent which believes that it will get what it wants, by doing what its internal map tells it to do. Justification.
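
As a purely illustrative aside, the core of DF.1 - a minimal map of the animal's current motor output, its goal and its pathway, together with the self-correction of condition (xii) - can be sketched as a small data structure in Python. The field names and the toy update rule are my own assumptions, not the thesis's specification:

```python
from dataclasses import dataclass, field

@dataclass
class MinimalMap:
    current_output: float                        # efference copy of the current motor command
    goal: float                                  # stored memory associated with attaining the goal
    pathway: list = field(default_factory=list)  # remembered sequence of steps toward the goal

    def distance_to_goal(self) -> float:
        return abs(self.goal - self.current_output)

    def self_correct(self, feedback_sign: float, step: float = 0.1) -> None:
        """Condition (xii): continue behaviour that reduces the "distance" to the
        goal, abandon behaviour that increases it, and update the map."""
        before = self.distance_to_goal()
        self.current_output += step * feedback_sign
        if self.distance_to_goal() > before:                 # the adjustment made things worse
            self.current_output -= 2 * step * feedback_sign  # reverse direction
        self.pathway.append(self.current_output)             # update the minimal map

# A toy run: the agent nudges its motor output until it matches its goal.
m = MinimalMap(current_output=0.0, goal=1.0)
for _ in range(15):
    m.self_correct(feedback_sign=1.0)
print(round(m.distance_to_goal(), 2))
```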

Definition - "Operant agency"
Since operant conditioning is a form of learning which presupposes an agent-centred intentional stance, animals that are capable of undergoing operant conditioning can thus be said to exhibit a form of agency called operant agency.


Definition - Navigational agency
The conditions are very similar to those for operant agency (listed above), with minor amendments.

DF.2 We are justified in ascribing agency to a navigating animal if the following features can be identified:

(i) innate preferences or drives;

(ii) innate motor programs, which are stored in the brain, and generate the suite of the animal's motor output;

(iii) a tendency on the animal's part to engage in exploratory behaviour, in order to locate food sites;

(iv) an action selection mechanism, which allows the animal to make a selection from its suite of possible motor response patterns and pick the one that is the most appropriate to its current circumstances;

(v) fine-tuning behaviour: efferent motor commands which are capable of steering the animal in a particular direction - i.e. towards food or towards a visual landmark that may help it locate food;

(vi) a current goal (long-term goal): the attainment of a "reward" (usually a distant food source);

EXTRA CONDITION:
(vi*) sub-goals (short-term goals), such as landmarks, which the animal uses to steer itself towards its goal;

(vii) visual sensory inputs that inform the animal about its current position, in relation to its long-term goal, and enable it to correct its movements if the need arises;

(viii) direct or indirect associations (a) between visual landmarks and local vectors; (b) between the animal's short term goals (landmarks) and long term goals (food sites or the nest). These associations are stored in the animal's memory and updated when circumstances change;

(ix) an internal representation (minimal map) which includes the following features:

(a) the animal's current motor output (represented as its efference copy);

(b) the animal's current goal (represented as a stored memory of a visual stimulus that the animal associates with the attainment of the goal) and sub-goals (represented as stored memories of visual landmarks); and

(c) the animal's pathway to its current goal, via its sub-goals (represented as a stored memory of the sequence of visual landmarks which enable the animal to steer itself towards its goal, as well as a sequence of vectors that help the animal to steer itself from one landmark to the next);

(x) the ability to store and compare internal representations of its current motor output (i.e. its efference copy, which represents its current "position" on its internal map) and its afferent sensory inputs. Motor output and sensory inputs are linked by a two-way interaction;

[NOT NEEDED:
(xi) a correlation mechanism, allowing it to find a temporal coincidence between its motor behaviour and the attainment of its goal;]

(xii) self-correction, that is:

(a) an ability to rectify any deviations (or mismatches) between its view and its internally stored image of its goal or sub-goal - first, in order to approach its goal or sub-goal, and second, in order to keep track of it;

(b) abandonment of behaviour that increases, and continuation of behaviour that reduces, the animal's "distance" (or deviation) from its current goal; and

(c) an ability to form new associations and alter its internal representations (i.e. update its minimal map) in line with variations in surrounding circumstances that are relevant to the animal's attainment of its goal.

If the above conditions are all met, then the animal can be said to exhibit what I will call navigational agency. Such an animal qualifies as an intentional agent which believes that it will get what it wants, by doing what its internal map tells it to do. Justification.


Definition - Tool Agency
Once again, the conditions are very similar to those for operant agency (listed above), with minor amendments.

DF.3 An animal can be described as using a tool intentionally if the following features can be identified:

NEW CONDITION: a tool - that is, an item external to the animal, which it modifies, carries or manipulates, before using it to effect some change in the environment (Beck, 1980);

(i) innate preferences or drives;

(ii) innate motor programs, which are stored in the brain, and generate the suite of the animal's motor output;

(iii) a tendency on the animal's part to engage in exploratory behaviour, by using its tools to probe its environment;

(iv) an action selection mechanism, which allows the animal to make a selection from its suite of possible motor response patterns and pick the one that is the most appropriate for the tool it is using and the object it is being used to obtain;

(v) fine-tuning behaviour: an ability to stabilise one of its motor patterns within a narrow range of values, to enable the animal to achieve its goal by using the tool;

(vi) a current goal: the acquisition of something useful or beneficial to the individual;

(vii) sensory inputs that inform the animal whether it has attained its goal with its tool, and if not, whether it is getting closer to achieving it;

(viii) associations between different tool-using motor commands and their consequences, which are stored in the animal's memory;

(ix) an internal representation (minimal map) which includes the following features:

(a) the animal's current motor output (represented as its efference copy);

(b) the animal's current goal or end-state (represented as a stored visual memory involving a tool that the animal associates with attaining its goal); and

(c) the animal's pathway to its current goal (represented as a stored memory of a sequence of movements, coupled with sensory feedback, which allows the animal to steer its tool towards its goal);

(x) the ability to store and compare internal representations of its current motor output while using the tool (i.e. its efference copy, which represents its current "position" on its internal map) and its afferent sensory inputs;

[NOT NEEDED:
(xi) a correlation mechanism, allowing it to find a temporal coincidence between its motor behaviour and the attainment of its goal;]

(xii) self-correction, that is:

(a) an ability to rectify any deviations in motor output from the range which is appropriate for attaining the goal;

(b) abandonment of behaviour that increases, and continuation of behaviour that reduces, the animal's "distance" (or deviation) from its current goal; and

(c) an ability to form new associations and alter its internal representations (i.e. update its minimal map) in line with variations in surrounding circumstances that are relevant to the animal's attainment of its goal.

If the above conditions are all met, then we can legitimately speak of the animal as an intentional agent which believes that it will get what it wants, by doing what its internal map tells it to do. Justification.


Definition - Agency in a social context
The conditions are similar to those for operant agency (listed above), but there are a few extra conditions.

DF.4 We are justified in ascribing agency to a social animal if the following features can be identified:

NEW CONDITION: a role model or knowledgeable individual;

NEW CONDITION: sensory capacities: the ability to discriminate between individual members of its own species (conspecifics), as well as between members and non-members of its group;

NEW CONDITION: memory capacity: the ability to keep track of the status of individuals within one's group, and remember one's past interactions with them (book-keeping);

NEW CONDITIONS: learning: the ability to learn from observing the behaviour of other individuals (observational learning) and to acquire new knowledge that is specific to one's group (traditions);

NEW CONDITION: representation: the ability to represent another individual in its group as a useful, reliable role model, to be followed in the pursuit of important objectives such as food;

(i) innate preferences or drives;

[NOT NEEDED? (ii) innate motor programs, which are stored in the brain, and generate the suite of the animal's motor output;]

(iii) a tendency on the animal's part to engage in exploratory behaviour;

(iv) an action selection mechanism, which allows the animal to make a selection from its suite of possible motor response patterns and pick the one that is the most appropriate to its current social setting;

(v) fine-tuning (controlled, modulated activity): the ability to model its behaviour on that of a knowledgeable individual (the role model), and to adjust its social behaviour to take account of differences between the individuals in its group, as well as changes in a given individual's behaviour;

(vi) a current goal for the animal, which is (at least qualitatively) the same as the goal which its role model is currently pursuing or has pursued in the past;

(vii) sensory inputs that inform the animal whether it has attained its current goal, and if not, whether it is getting closer to achieving it;

(viii) associations between stored memories of the different individuals in the animal's group and the (good or bad) consequences of following their example, as well as direct associations between different motor commands and their consequences, which are stored in the animal's memory;

(ix) an internal representation (minimal map) which includes the following features:

(a) the animal's current state, which includes both its current spatial relation to its role model, and its current motor output (represented as its efference copy);

(b) the animal's current goal (represented as a stored memory of a sensory stimulus which the animal associates with the attainment of that goal); and

(c) the animal's pathway to its current goal (represented as a stored memory of the individual which can reliably lead the animal to its goal - i.e. the role model);

(x) the ability to store and compare internal representations of its current motor output (i.e. its efference copy, which represents its current "position" on its internal map) and its afferent sensory inputs;

[NOT NEEDED:
(xi) a correlation mechanism, allowing it to find a temporal coincidence between its motor behaviour and the attainment of its goal;]

(xii) self-correction, that is:

(a) an ability to rectify any deviations in its social behaviour from that which is appropriate for attaining its current goal;

(b) abandonment of social behaviour that proves to be unproductive (e.g. when the animal's expectations of another individual are disappointed), and continuation of behaviour that helps the animal obtain its current goal; and

(c) an ability to form new associations and alter its internal representations (i.e. update its minimal map) in line with variations in surrounding circumstances that are relevant to the animal's attainment of its goal.

If the above conditions are all met, then we can legitimately speak of the animal as an intentional agent which believes that it will get what it wants, by doing what its internal map tells it to do. Justification.


CONCLUSIONS FROM CHAPTER THREE

Definition: generic intentional object

I propose that the emotions we can meaningfully attribute to non-human animals are those that help them to survive and/or flourish. These emotions have a teleology: we can think of each kind of animal emotion as being "for" responding appropriately to a certain kind of biologically significant event, which is what the emotion is about. I propose to call this kind of event the generic intentional object of the emotion.

Defining features of animal emotions

Animal emotions can be defined by the following five features:

Cognitive requirements of animal emotions

The cognitive pre-requisites of animal emotions are identical with the requirements for intentional agency in animals, identified in chapter two.

We cannot identify emotions in animals until we have located behaviours which are best understood by adopting an agent-centred intentional stance. Although emotions are bona fide mental states, an animal's emotional reactions, taken by themselves, do not warrant the ascription of mental states to it. Since emotions are mental states, we first have to identify their occurrence in animals within the context of intentional agency before we can legitimately speak of them as accompanying animals' reactions.

Thus although emotions may sometimes occur in the absence of accompanying beliefs, the ascription of emotions to animals is only warranted for those kinds of animals that are capable of holding beliefs. (Reason: we can only ascribe emotions to animals that are capable of intentional agency, and intentional agency requires the occurrence of beliefs regarding the attainment or avoidance of the object.)

An animal's preference behaviour, by itself, does not warrant the ascription of emotions to animals, as this kind of behaviour can be described using a goal-centred intentional stance, and therefore does not require the ascription of mental states to the animal.

Regan's notion of a preference-belief (1988, p. 58) is therefore an insufficient basis for attributing beliefs to animals.

On the other hand, the attribution of an emotion to an animal does not require it to have propositional attitudes. Strategic attitudes - "This works; that doesn't" - are what counts. We may, however, speak of animals' strategic beliefs as having a propositional content. The content of an animal's strategic beliefs relates to how it can pursue or avoid the intentional object of the emotion. (An animal may also have accompanying implicit background beliefs, whose content consists of those propositions entailed by the strategic beliefs it forms.) The attribution of emotions to an animal thus requires it to be an intentional agent that is capable of self-correcting behaviour, whereby it modifies its strategies for attaining its goals.

It follows from the above considerations that any animal which is capable of desiring ends (e.g. food or sex) must also be capable of desiring means to these ends. I therefore reject the possibility of an animal that only has simple desires, such as a dog's desire for a bone.

The intentional objects of animal emotions

Panksepp's (1998) brain-based, neurophysiological approach to animal emotions provides a complete explanation of their intentionality, or what they are "about". Rival, body-based accounts of emotion fail to adequately account for their intentionality; whereas cognitivist accounts, which attempt to define emotions in terms of the propositional attitudes that accompany them, are too exacting in their cognitive requirements, and render problematic the very attribution of emotions to non-human animals.

On Panksepp's account, the basic kinds of emotions in animals arose in response to different kinds of environmental challenges their ancestors encountered. Meeting that challenge is what each kind of emotion is "about". An animal's response to each of these challenges is mediated by several emotion systems within its brain, easily identifiable to specialists, that give each kind of emotion its characteristic neurological "key signature". Each emotion system has evolved in response to a different environmental challenge.

I claim that Panksepp's approach can explain on a generic level how a physical state of affairs such as a brain state can be "about" something. I propose that there are two robust senses - one non-mentalistic and the other mentalistic - in which the various kinds of emotions are "about" the environmental challenges they evolved to meet. First, each environmental challenge has caused the evolution of a distinctive suite of emotional responses which are directed at it. This has been accomplished through natural selection over millions of years: an animal's emotional response to a challenge (e.g. jumping back at the sight of a snake) promotes its survival. Second, as emotions are capable of motivating intentional actions as well as reactions, we can say that each environmental challenge has caused the evolution of a kind of mental capacity which is specifically directed at it. (For instance, an animal's emotion of fear can motivate it not only to react, but also to act intentionally, in a way that saves its life.) Thus the environmental challenges facing animals are not only the ultimate causes, but also the objects, of the different kinds of emotions in animals.

The adequacy of a neurophysiological account of emotions

Panksepp's neurophysiological account can also explain all of the key features of animal emotions we identified above. Justification.

What kinds of emotions do animals have?

The brain of every species of mammal contains various basic emotional systems. Panksepp (1998, pp. 48-49) defines these emotion systems in terms of the following features:

The seven emotional systems identified to date in mammals include "fear, anger, sorrow, anticipatory eagerness, play, sexual lust, and maternal nurturance" (1998, p. 47). These emotion systems constitute natural kinds.

Each emotion system has a different generic intentional object: namely, the environmental challenge it evolved to deal with. The fact that the instinctual motor outputs triggered by these systems can subsequently be modulated by cognitive inputs implies that, provided these systems satisfy the cognitive requirements for intentional agency (specified in chapter two), their responses can be described as genuinely emotional and not merely reflexive.

Neurological criteria alone cannot establish the presence of mental states such as emotions in an animal. It also has to be demonstrated that each of the emotion systems defined above (according to neurological criteria) can motivate intentional agency before we can call it a true emotion.

Which animals have emotions?

Since the basal ganglia (the region of the brain where behavioural responses related to seeking, fear, anger and sexual lust originate) are well-defined in all vertebrates, and since we have seen in chapter two that fish (the simplest vertebrates) satisfy the requirements for intentional agency, we can safely assume that all vertebrates possess rudimentary emotions, regardless of whether these emotions are accompanied by phenomenal consciousness.


CONCLUSIONS FROM CHAPTER FOUR

The Question of Animal Consciousness

In this chapter, I use the term phenomenal consciousness to denote states with a subjective feel, such as the experience of seeing red (Block, 1997). Neuroscientists employ a closely related term, primary consciousness (also called "core consciousness" or "feeling consciousness"), which "refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions" but excludes "awareness of one's self as an entity that exists separately from other entities" (Rose, 2002, p. 6). The main criterion used by scientists to verify the occurrence of primary consciousness in an individual is his/her capacity to give an accurate verbal or non-verbal report on his/her surroundings.

Philosophical distinctions regarding consciousness

Many contemporary philosophers argue that the question of which animals possess phenomenal consciousness can only be answered by carefully distinguishing it from other notions of "consciousness". My research on the subject of consciousness has led me to conclude that only one of the philosophical distinctions drawn between the various forms of consciousness - namely, the distinction between transitive creature consciousness and phenomenal consciousness - is of any help in resolving the Distribution Question (which animals are phenomenally conscious?). The other philosophical concepts of consciousness lack relevance, for one or more of the following reasons:

(a) although they may help to sharpen our philosophical terminology, they have no bearing on the Distribution Question;

(b) they are poorly defined;

(c) they are inapplicable to non-human animals;

(d) they fail to "carve reality at the joints" as far as animal consciousness is concerned - that is, they apply to too many animals (including animals that cannot plausibly be described as phenomenally conscious) or too few (e.g. humans and great apes only).

During the course of my research, I also found that the distinction between transitive and intransitive creature consciousness, which was supposed to be purely conceptual, turned out to be a real distinction.

I also discovered that what appeared to be a robust nomic connection between wakefulness (defined according to brain-based criteria) and phenomenal consciousness, had been entirely overlooked by philosophers, because of a conceptual distinction they had already formulated between these two notions of consciousness.

I propose three concepts of consciousness that I have uncovered in the scientific literature, which (I believe) do a better job of "carving reality at the joints" as far as animals are concerned than existing philosophical categories: (i) integrative consciousness (the kind of consciousness which gives an animal access to multiple sensory channels and enables it to integrate information from all of them); (ii) object consciousness (awareness of object permanence; ability to anticipate that an object which disappears behind an obstacle will subsequently re-appear); and (iii) anticipatory consciousness (ability to visually anticipate the trajectory of a moving object).

The contemporary philosophical debate about animal consciousness is split into several camps (see Lurz, 2003 for a summary). Common to all of the above positions is an underlying assumption: that the difference between phenomenally conscious mental states and other states can be formulated in terms of concepts which already exist within our language. I suggest that the "original sin" of philosophers who have formulated theories of phenomenal consciousness was to suppose that the requirements for possessing subjective awareness could be elucidated through careful analysis. In chapter 4, I reject this analytical approach, on the grounds that we still do not know what consciousness is, why it arose in the first place, or what it is for. Instead, I propose that we should start with what we do know. There is an abundance of neurological data relating to how consciousness originates in the brain. This data, I suggest, is the most promising avenue of inquiry for any philosophical investigation of consciousness.

Behavioural criteria for consciousness

The standard observational criterion used to establish the occurrence of primary consciousness in animals is accurate report (AR). Although there is very good evidence that monkeys and at least some other mammals and birds satisfy this criterion, I conclude that the evidence of accurate report in animals, while highly suggestive, cannot definitively establish that these animals possess phenomenal consciousness.

After examining three other categories of proposed behavioural indicators for consciousness - Panksepp's criteria for affective consciousness; the behavioural indicators for conscious pain; and hedonic behaviour in animals - I conclude that while some of the behaviours cited do indicate the occurrence of phenomenal consciousness, positive identification of a phenomenally conscious state cannot be made without either verbally interrogating the subject (as in some forms of accurate report) or checking that the behaviour is regulated by parts of the brain that are associated with phenomenal consciousness.

Since the interrogation of non-human animals is highly problematic (for reasons discussed in chapter 4), it follows that phenomenal consciousness in animals ultimately has to be defined as a neurological state in order for us to make some headway in identifying it (see Seth, Baars and Edelman, 2005). Behavioural indicators alone are too weak to settle the matter of which, if any, animals are phenomenally conscious. However, the combination of behavioural and neurological evidence constitutes a very powerful case for the occurrence of phenomenal consciousness in non-human animals.

Neural pre-requisites for consciousness

There are three major properties of consciousness that are fairly well accepted by neurobiologists (Seth, Baars and Edelman, 2005):

Only when neural activity in the brain stem reaches the cerebral cortex - the extensive outer layer of grey matter in the brain's cerebral hemispheres - does it translate into conscious awareness. Human consciousness appears to require brain activity that is diverse, temporally conditioned and of high informational complexity. The human neocortex (a laminated, six-layered structure that makes up the bulk of the cerebral cortex) satisfies these criteria because it has two unique structural features: (i) exceptionally high connectivity within the neocortex and between the cortex and thalamus; and (ii) enough mass and local functional specialisation to permit regionally specialised, differentiated activity patterns (Rose, 2002, p. 7).

We are unaware of the perpetual neural activity that is confined to subcortical regions of the central nervous system, including cerebral regions beneath the neocortex as well as the brainstem and spinal cord (Rose, 2002, p. 6).

Brain monitoring techniques indicate that in human beings, only processes that take place within the associative regions of the neocortex are accompanied by consciousness; activities which are confined to the primary sensory cortex or processed outside the cortex are inaccessible to consciousness (Roth, 2003, pp. 36, 38; Rose, 2002, p. 15). The associative regions are distinguished by their high level of integration and large number of connections with other regions of the brain (Roth, 2003, p. 38).

Which animals are phenomenally conscious?

Mammals appear to satisfy all of the neurological requirements for primary consciousness: they engage in brain sleep, have re-entrant pathways between their thalamus and cortex, and possess a true neocortex. Since some of them (especially monkeys) have been shown to satisfy the requirements for non-verbal accurate report, and their basic emotions appear to be the same as ours, I conclude that a good case can be made that mammals are phenomenally conscious, on the basis of the combined neurological and behavioural evidence.

While the similarity arguments beloved by philosophers can be used to make a strong cumulative case that conscious feelings are widespread among mammals, the massive dissimilarities between the neocortex of the mammalian brain and the (apparently) much less complex structures in the brains of birds and reptiles effectively undermine any arguments for conscious feelings in these animals that are based on "similarity" alone.

I argue that when assessing consciousness in other animals, whose brains are different in design from our own, we have to rely on analogy. As functional analogies between the brains of mammals and other animals are incomplete at present, we cannot definitively conclude that they are phenomenally conscious. For instance, birds lack a neocortex and we cannot at present identify any structure in their brains which is homologous to a neocortex, or of comparable informational complexity. However, since (i) birds satisfy some of the neurological requirements for consciousness in mammals (e.g. brain sleep); (ii) the dorsal ventricular ridge (DVR) in reptiles and birds can be regarded as analogous to the mammalian neocortex, insofar as it serves as a principal integratory centre and exhibits a pattern of auditory and visual connections with sensory centres and the thalamus which is broadly similar to that of the sensory neocortex in mammals; and (iii) the behavioural sophistication of birds compares favourably with that of mammals (Chapell and Kacelnik, 2004; Emery and Clayton, 2004), we can say that the case for phenomenal consciousness in birds is highly suggestive. The evidence for phenomenal consciousness in reptiles is much weaker (but see Cabanac, 1999, 2003). Since fish and amphibians, whose brains are built according to the same basic design plan found in all vertebrates, lack anything that is even analogous to the mammalian neocortex (such as the integrative center, or dorsal ventricular ridge, found in reptiles and birds), and the available behavioural evidence also suggests that they are not phenomenally conscious (Cabanac, 2003, Rose, 2002, 2003a), we can formulate a counter-analogical argument that fish and amphibians are not phenomenally conscious, on account of the massive neural and behavioural disparities between these vertebrates and conscious mammals.

For the time being, the question of whether octopuses (whose brains are large and highly complex, but fundamentally different from those of vertebrates) are phenomenally conscious must remain speculative. Neurological arguments from analogy are not applicable here; the best we can do at present is formulate inferential arguments based on their observed behaviour. Edelman, Baars and Seth (2005) make some useful suggestions regarding future neurophysiological and behavioural research with these creatures. The brains of honeybees are probably too small to support phenomenal consciousness (David Edelman, personal email, 19 July 2004).

Ethical implications of animal consciousness

There are at least three distinct senses in which interests can be ascribed to creatures:

For some philosophers, a capacity for phenomenal consciousness is regarded as a sine qua non for having interests and being morally relevant. However, the above summary suggests that the ethical divide between mindless organisms and animals with minimal minds is greater than that between animals with minimal minds and phenomenally conscious animals, and the division between the simplest organisms and assemblages lacking intrinsic finality is greater still. Animals' interests, whether conscious or not, can be measured, and can be harmed by our actions. Despite their lack of phenomenal consciousness, we can legitimately speak of the welfare of fish, for instance.

We have a strong prima facie duty to refrain from treating phenomenally conscious animals cruelly, and the duty (under more restricted circumstances) to be kind to them. For companion animals, that would entail befriending them. Logically, any animals that lacked phenomenal consciousness could not serve as true "companions".

Which animals are rational?

What is rationality?

My concern in this section is with the philosophical usage of the term "rationality", as opposed to the broader usage by economists (who emphasise consistency of choice in maximising "utility", regardless of the process and the goal) and biologists, for whom rationality is the consistent maximisation of inclusive fitness across a set of relevant circumstances (Kacelnik, 2004). The standard philosophical definition of rationality emphasises the process by which decisions are made: rational beliefs are arrived at by reasoning, and "rational beliefs are contrasted with beliefs arrived at by emotion, faith, authority or arbitrary choice" (definition of "Rationality" by H. I. Brown in Oxford Companion to Philosophy, 1995, p. 744, cited in Kacelnik, 2004).

Kenny's argument against rationality in non-human animals

I evaluate the merits of what I consider to be the best philosophical argument against the possibility of rationality in non-human animals. Kenny (1975) formulates an argument, based on Aquinas, which assumes that rationality pertains to means and ends and is distinguished from other forms of end-directed behaviour - such as the intentional agency I described in chapter two - by the agent's grasp of the connection between the means and the end. Kenny's argument is that when an animal does X in order to do Y, it does not do X for a reason, even though it is aiming at a goal in doing so. The animal, lacking a language, cannot give a reason for its actions, and therefore cannot be said to act for a reason.

I argue that Kenny's argument fails: strictly speaking, a rational act requires an agent to be able to justify its actions to itself, not to others. I also suggest that Kenny is mistaken in regarding language as the only tool that can render the agent's self-justification intelligible to others.

A model of animal rationality

The Ramseyan "map" metaphor which I used to characterise belief in chapter two is thus inadequate to define rationality: while a minimal map may enable an intentional agent to steer herself home by the shortest route, it does not require the agent to grasp the relevant properties of the means by which she accomplishes this task - including properties that the means does not currently possess.

On the account I am proposing, rational agency consists in pursuit of a goal which is governed not by a map but by a transformational model, constructed by the agent, which contains only those specific properties of the means that are suitable for realising the desired end, including some properties which the means does not yet possess. If we observe the agent to be fine-tuning the properties of her chosen means over an extended period of time, in a way that transforms the means into something ideal for realising her end, according to a model that we can recognise, then we are justified in calling her behaviour rational.

Are any non-human animals moral agents?

I argue that it is highly unlikely that non-human animals are capable of moral agency, for the following reasons.

1. We cannot speak of morality in non-human animals unless parents can transmit moral norms to their offspring. Some non-human animals (especially canids and primates) appear to be capable of inculcating rule-following behaviour in their offspring, but I argue that they are actually teaching their young how to avoid a bad consequence. One vital difference between following a rule and avoiding a bad consequence is that a rule-follower understands that what causes the bad consequence is not the offending act itself, but the rule-enforcer's discovery of the act, coupled with her attribution of it to the offender. The act of following a rule therefore requires an individual to possess a human-like theory-of-mind, and be able to attribute to other individuals not only beliefs, but mistaken beliefs about other agents ("I won't get caught if she thinks someone else did it"). Even for chimpanzees, the evidence for such an ability is highly questionable at best (Hauser, Chomsky and Fitch, 2002; Nissani, 2004; Emery and Clayton, 2004).

2. A moral agent must be capable of evaluating and improving her conduct over the entire course of her life. To do this, she must possess an extraordinarily "thick" concept of time: she has to be able to recall her past actions in a temporal sequence, looking for signs of either progress or back-sliding, and formulate resolutions to improve her conduct in the future. An individual that lacked the ability to reflect on her past and future life would be morally paralysed, unable to diagnose her character faults or resolve to rectify them. In other words, moral agency requires not only an episodic memory, which some birds may possess in a rudimentary form (Emery and Clayton, 2004), but an autobiographical memory, which makes "mental time-travel" possible. Autobiographical memory is generally acknowledged to be a human specialty (Tulving, 2002).

3. Being moral involves more than performing virtuous acts; one must also possess the relevant virtue. To possess a virtue is to be a certain sort of person with a certain complex mindset (Hursthouse, 2003). The possession of appropriate attitudes towards morally significant individuals is an important part of the "complex mindset" presupposed in morally virtuous behaviour. It is tempting to suppose that non-human animals could possess these attitudes implicitly, and that they might acquire them simply by following the example of role models in their group, even in the absence of language. However, this supposition overlooks two significant features of our moral attitudes: we can critically evaluate our own attitudes; and we can attempt to inculcate virtuous attitudes in other individuals. Unless animals can exercise these capacities, it is hard to see how their attitudes could be justifiably deemed "moral" or "immoral". One could hardly fault an animal for the "bad attitudes" it picked up simply from following its role models, if it lacked the cognitive apparatus for questioning its own attitudes, or correcting those of another individual. Both of these tasks can only be carried out on an explicit level, which, I argue, requires language in the "narrow" sense of the word described by Hauser, Chomsky and Fitch (2002) - i.e. the ability to construct arbitrarily long sentences that allow language users to refer to each other's attitudinal concepts, criticise each other's bad attitudes and instil new moral norms. Only humans appear to possess this kind of language faculty.

I conclude that moral agency is only likely to occur in human beings.


CONCLUSIONS FROM CHAPTER FIVE

Which creatures do we have duties to?

From an ethical standpoint, the class of individuals to whom one ascribes intrinsic value may include (i) all (and only) human beings (anthropocentrism); (ii) all (and only) sentient beings (animal welfare and animal rights views); or (iii) all (and only) living organisms (biocentric individualism). In this chapter, I defend the last-named position, biocentric individualism. (The first two positions were rejected in chapter one because they overlook the biological interests of both sentient and non-sentient beings.) The primary reason why harming a living thing is wrong is that harming it thwarts the realisation of its telos (Taylor, 1986). Thus biocentric individualism implies that each and every organism has a prima facie entitlement to be left alone and not be sacrificed to any individual's ends. (However, I also argue that this entitlement is massively defeasible.)

My contention that organisms have a good of their own also entails some form of deep (as opposed to shallow) ecology (Naess, 1989).

Do we have duties to ecosystems?

I reject both pure holism, which ascribes moral value only to ecosystems and not to individuals, and pluralistic holism, which ascribes moral value to ecosystems as well as individual organisms. Briefly, I maintain that pure holism is misconceived, because ecosystems do not possess the kind of unity which warrants the ascription of interests to them, whereas living things do. As regards pluralistic holism, while I concede that an ecosystem has some kind of moral standing, insofar as it has a kind of flourishing and can benefit or be harmed, I argue that this moral standing is derivative upon that of the individuals whose interactions constitute it. Since ecosystems do not do any extra ethical "work", biocentric individualism is the more parsimonious hypothesis. From a practical standpoint, however, it can sometimes be more convenient to consider the morality of an action which will affect various organisms in terms of the benefits and harms to the ecosystem as a whole, rather than the benefits or harms to its individual members.

Do we have the same duties to all organisms?

I reject the assumption that living organisms can be compared on a single sliding scale of intrinsic value, which is shared by those who contend that all organisms have equal intrinsic value and by those who argue that some organisms are more valuable than others. Instead, I argue that there are different dimensions of intrinsic value, as there are certain kinds of goods that can be realised by some organisms and not others. Some of these dimensions of value are shared by all organisms, some are restricted to those organisms possessing minimal minds, some are only instantiated by phenomenally conscious organisms, and some are only realised by moral agents. Because organisms instantiate different dimensions of value, we may have duties to some organisms that we do not have towards others.

I enumerate five kinds of goods which I refer to as basic animal goods: biologically grounded goods, practical goals, practical knowledge, the companionship of other individuals, and play. Basic animal goods are intrinsically valuable, as each of them can be desired for its own sake and is a legitimate object of pursuit (as shown by its biological usefulness). Additionally, none of the basic animal goods can be reduced to an aspect of the others. As such, the pursuit of these goods requires no argumentative justification: their goodness is self-evident, although it may be inappropriate to pursue these goods on particular occasions. Phenomenal consciousness is not needed to realise most of these basic animal goods. The ethical significance of phenomenal consciousness in a basic goods framework is thus relatively minor: having a minimal mind makes much more of a difference.

The satisfaction of wants belonging to these categories of basic animal good can be said to contribute to the animal's thriving, and thereby promote its telos, just as the thwarting of these wants causes a "failure to thrive".

Because animals with minimal minds are intentional agents and thus realise dimensions of value that other living things do not, I propose an ethical principle which I call the Wrongful Killing of Animals principle, or WKA:

The wrongful killing of an animal that has desires is a greater moral evil than the wrongful killing of an organism without a mind.

One can also formulate a similar claim about wrongful injury.

The foregoing principle does not imply that the killing of animals is necessarily wrong, but it does provide us with additional prima facie grounds for not taking their lives.

The Golden Rule can be construed broadly, as a prima facie obligation to refrain from harming other individuals ("Do not do to others what you would regard as harmful if done to yourself"), or more narrowly, as an injunction to advert to their wishes ("Treat others as you would wish them to treat you"). In the broad sense, it applies to all organisms; in the narrower sense, it applies only to those organisms with (conscious or non-conscious) desires. However, the Golden Rule does not require us either to assist animals in attaining their wishes, or to help other organisms to realise their ends.

Because animals with mental states can suffer, we have a prima facie obligation not to be cruel to them. Cruelty includes the deliberate frustration of animals' first-order desires (Carruthers, 1999). The infliction of cruelty upon animals, as an end in itself, is wrong in all circumstances.

What is the moral status of "marginal animal cases"?

I use the term "marginal animal cases" to refer to animals whose immaturity, genetic damage or physical injury precludes them (at least temporarily) from exercising agency or being sentient - e.g. animal embryos, anencephalic animals and decerebrated and permanently comatose animals. I defend an alternative view that marginal animal cases have the same moral status as their biologically normal counterparts because they have the same nature as their normal counterparts. More precisely, I argue that since an individual's telos determines its moral status, and each "marginal animal case" has a physically normal counterpart with the same telos, it follows that "marginal animal cases" have the same moral status as their normal counterparts.

What duties do we have to companion animals?

In addition to our general duties to animals, there is a further category of obligations that we have towards companion animals, simply because they are friends whose company we may enjoy for its own sake. Insofar as we have assumed responsibility for looking after them, we have much stronger obligations towards them than towards other animals: we are required not only to refrain from hindering them in the pursuit of their proper ends, but also to offer them positive assistance in that pursuit, when needed. This positive obligation is not, however, an absolute one. I also suggest that owners of companion animals have an absolute obligation not to harm, injure or kill their companion animals simply in order to procure or promote their own good - a proposal with interesting implications for the case of the dog in the lifeboat.

Do human beings have a special moral status?

Whether they are our friends or not, we have special obligations towards other human beings, on account of their unique telos, which includes goods pertaining to moral agency. Among the goals desired by human beings, we can discern certain categories of good whose realisation contributes to their well-being or thriving in a sense which is objective (in that their goodness is independent of the attitude of the subject pursuing them) and universal (good for everyone). I argue that the list of basic human goods is much more extensive than the list of basic animal goods, and that (despite some differences between the basic human goods listed by various philosophers in the natural law tradition) there is broad general agreement about its contents. These basic human goods add extra "dimensions" to the wrongfulness of killing a human being, as compared with the wrongful killing of an animal, because the human being (as a moral agent) is robbed of much more. Since the killing of a non-moral agent does not interrupt the "story of a life", it may be permissible in a wider range of circumstances.

What is the moral status of "marginal human cases"?

I reject what Dombrowski (1997) calls the argument from marginal cases, according to which there are no morally relevant differences between animals and those human beings who are not moral agents, whether because they are very young, severely intellectually disabled or permanently comatose. Marginal human cases, I argue, have the same telos as their normal human counterparts. The telos of non-human animals is different, because it does not include ends that presuppose moral agency. Species membership per se has no moral significance, but it is morally significant in an indirect sense, insofar as it determines an organism's telos.

Finally, I argue that we may have obligations relating to ecosystems, even if we do not have any obligations towards them as such. At a minimum, one has a strong prima facie obligation not to destroy or jeopardise an ecosystem, because it contains organisms which have moral standing, on account of their telos, and many of these organisms have a long-term interest in the continuation of their ecosystem, since they cannot leave behind any descendants without it. Protecting a species is also good, not just because it preserves individuals, but because it preserves a whole way of being alive - a unique kind of telos. The death of a species destroys that way of life forever. While such a loss represents a (non-moral) evil, the (intentional) causation of such a loss is a prima facie moral evil.


CONCLUSIONS FROM CHAPTER SIX

In this chapter, I propose the following principles which define what goods human beings are entitled to pursue, and the conditions under which they may pursue them, even at the expense of other life-forms. I also propose a principle which would allow humans to inflict harm on other organisms in defence of an ecosystem.

The Telos-Promoting Principle (TPP):

Human beings are morally entitled to pursue any of the basic human goods comprising their telos, even at the expense of other organisms.

TPP should be seen as a restricted self-preference principle: it affirms that human beings are entitled to put their own interests first, while restricting the grounds on which people may do so to the pursuit of basic human goods. The harm which humans may inflict on other living things is limited by the following principle.

The Telos-Promoting Harm Principle (TPHP):

A human activity which is inherently harmful to other living things is justifiable if:

(i) the activity itself is one which inherently tends to promote or enable the realisation of a basic human good;

(ii) the harm done by the activity EITHER

(a) inherently tends to promote or enable the promotion of a basic human good, OR

(b) does not inherently promote a basic human good, but is merely an unintended consequence of the activity;

(iii) the performance of the act makes a significant difference to the agent's prospects of realising a basic human good;

(iv) the harm done to other creatures by the agent pursuing the basic human good is kept to a practicable minimum. In particular:

(a) among the various possible instantiations of the basic human good in question, there are no alternative instantiations which the agent has the chance to pursue (without jeopardising his/her opportunities to realise the other basic goods), and whose realisation would cause significantly less harm to other creatures; AND

(b) there are no other significantly less harmful ways available to the agent of achieving the particular instantiation of a basic good that he/she has selected;

(v) any project that the act is part of, also conforms to all the conditions of TPHP;

(vi) the performance of the act is not intrinsically immoral for any other reason. In particular, performance of the act does not:

(a) threaten to irrevocably destroy the agent's capacity for some moral virtue, or

(b) contravene any over-riding duties to morally significant others, or

(c) violate anyone's basic rights, or

(d) involve destroying a person as a means to the agent's ends.

Notes: 1. Long-standing practices that have promoted the survival of the human race, as well as any future practices that may be necessary to the survival of the human race, are by definition virtuous in at least some circumstances, and therefore may not be regarded as intrinsically immoral.

2. The interests that other (non-rational) kinds of organisms have in realising their basic goods can never take precedence over human interests in achieving basic human goods, except where a prior claim is involved.

Although TPHP is a sufficient justification for harm inflicted on living things, it is not intended to be the sole justification. TPHP justifies the harm that an act inflicts on living things with reference to some basic human good achieved through the act. However, there is at least one other possible justification for inflicting harm on some living things: namely, the promotion of the basic interests of other living things.

The Justifiable Defence of Ecosystems Principle (JDEP)

JDEP allows us to inflict harm on living things for the sake of defending an ecosystem.

A human activity which inflicts harm on other living things is justifiable if:

JDEP offers us an ethically sound and realistic way to adjudicate between the competing interests of different species, while looking after ecosystems. The governing idea is not that we have duties to ecosystems as such, but that in addition to their competing short-term interests, nearly all of the organisms in an ecosystem have a long-term interest in keeping their ecosystem sustainable. The sustainability of an ecosystem is a convenient short-hand way of representing the combined interests of the individual organisms living in an ecosystem. If humans manage an ecosystem in accordance with JDEP, they are acting on behalf of these organisms.