Chapter 2 - What does it take to possess a Minimal Mind?

Part A - How should we look for intentional agency in organisms?


2.A.1 Preliminaries

In the second chapter, entitled "What does it take to possess a Minimal Mind?", I address the issue of which organisms can be said to possess mental states, and what kind of features the most basic kind of mind would have to possess.

The "minimal mind" which I shall describe in this chapter is not one which experiences qualia ("raw feels", such as the quality of redness that one experiences when one looks at ripe tomatoes), let alone phenomenal consciousness - a richer concept, which "covers all the various kinds of order and structure found within the domain of ... the world as it appears to us" (Van Gulick, 2004). The answer to the lay-person's question, "Does a minimal mind have subjective feelings?" is "No, but there are good reasons for calling it a mind nonetheless".

In this chapter, I argue that because an animal with a minimal mind can (i) sense objects in its environment, (ii) remember new skills, (iii) flexibly update its own internal programs, which regulate its behaviour, (iv) learn to associate actions with their consequences, (v) control its own bodily movements by fine-tuning them, (vi) represent its current state, its goal and the pathway it needs to follow to get to its goal, and (vii) correct any mistakes it makes in its movements towards its goal, and correct factual mis-representations of its environment in the light of new information, it deserves to be called a bona fide agent. In part C, I show that such an animal can create (through a process under its control) its own internal representations of its movement towards its goals, and that these representations: (i) track the truth, insofar as they correct their own mistakes; (ii) possess map-like features, as typical beliefs do (which is why I call them minimal maps); and (iii) incorporate both means and ends, making agency possible. I argue that it is appropriate to characterise these representations as beliefs, and the animal's goals as its objects of desire.

One of the surprises of this chapter is that there are in fact four kinds of minimal minds. I attempt to define the necessary and sufficient conditions for what I call operant agency, navigational agency, tool agency and social agency in such a way that they can only be explained by adopting an agent-centred intentional stance. Any organism that is capable of learning to acquire one of these kinds of agency thus qualifies as having beliefs, desires and intentions.

Finally, I formulate tentative conclusions regarding which animals should be regarded as intentional agents.


2.A.2 Mental states - Aristotelian, Cartesian and modern positions contrasted

I should acknowledge at the outset that the quest for "mental states" comes with some philosophical baggage. As my investigation eschews pre-conceived notions of what "the mind" is, I shall simply set forth these views, and refrain from adjudicating between them until our investigation is complete.

Our modern terminology of mental states owes much to Descartes, who distinguished between activities or states requiring our attention and processes which can be performed absent-mindedly or while asleep. Descartes characterised processes of the former kind as "cogitative", or relating to thought. Descartes' conception of "thought" was meant to encompass all mental states, as he explained in the Principles of Philosophy (1644): "By thought I understand all that of which we are conscious as operating in us. And that is why not alone understanding, willing, imagining, but also feeling, are here the same thing as thought" (Haldane and Ross, 1970, I.222). Elsewhere, he wrote:

[T]here are ... characteristics which we call mental [literally cogitative, or relating to thought] such as understanding, willing, imagining, sensing, and so on. All these are united by having in common the essential principle of thought, or perception, or consciousness [conscientia, a state of being aware] (Descartes' Reply to Hobbes' Second Objection, translation and footnotes by Ross, 1975-1979).

By contrast, processes which can be performed absent-mindedly or while asleep were excluded from the sphere of mental states, and were deemed to be "automatic".

This way of carving up the activities of organisms would have seemed highly unusual to Aristotle. Indeed, there was no term in his lexicon for what we would call "mental states". The term psuche (soul) will not do, as plants, which are said to lack perception, have a psuche because they are capable of being nourished (De Anima 2.4, 415a24-25, 415b27-28). Animals are characterised by virtue of their faculty of perception (aisthesis) (De Sensu 1, 436b10-12), but non-human animals are said to lack reason (logos) (De Anima 3.3, 428a4; Eudemian Ethics 2.8, 1224a27; Politics 7.13, 1332b5; Nicomachean Ethics 1.7, 1098a3-4), reasoning (logismos) (De Anima 3.10, 433a12), thought (dianoia) (Parts of Animals 1.1, 641b7), belief (doxa) (De Anima 3.3, 428a19-24; On Memory 450a16) and intellect (nous - also translated as "mind") (De Anima 1.2, 404b4-6; all references cited in Sorabji, 1993, p. 14). Aristotle described nous (translated as "mind", but also rendered as "intellect" or "reason") as "the part of the soul by which it knows and understands" (De Anima 3.4, 429a9-10; cf. 3.3, 428a5; 3.9, 432b26; 3.12, 434b3). "[J]ust as the having of sensory faculties is essential to being an animal, so the having of a mind is essential to being a human" (Shields, 2003; see also Metaphysics 1.1, 980a21; De Anima 2.3, 414b18; 3.3, 429a6-8). Aristotle does not seem to have regarded perception and thought as even belonging to a common category (e.g. "knowledge", "cognition", "awareness" or "consciousness"). On the contrary, he sharply distinguished knowledge or cognition (gnosis) from perception (De Anima 3.8, 431b24), and apart from his discussion (De Anima 3.2) of how it is that we can perceive that we are seeing or hearing, seems to have said very little about what we would call "consciousness". The only term that Aristotle does apply to both perception and thought is krinein (De Anima 3.9, 432a16), which according to Ebert (1983) is best translated as discrimination, or a discerning activity.

According to the Cartesian schema, then, there is a fundamental divide between beings that have minds and those lacking them, whereas on Aristotle's view, there are three basic categories of organisms: those that are capable of being nourished, those that can discriminate between objects in their surroundings, and those that can know and understand.

The modern conception of mental events is somewhat broader than Descartes': it is now acceptable to speak of unconscious as well as conscious mental processes. Some philosophers (e.g. Searle, 1999, p. 88) differentiate between nonconscious and subconscious brain states, recognising only the latter as mental, because they are at least potentially conscious. Others (e.g. Lakoff and Johnson, 1999, p. 10) insist that "most of our thought is unconscious, not in the Freudian sense of being repressed, but in the sense that it operates beneath the level of cognitive awareness, inaccessible to consciousness and operating too quickly to be focussed on".


As we saw in the Introduction, there is also a considerable diversity of opinion about the existence and location of a clearcut boundary between entities that have minds and those that do not.

A common reaction among scientists to the philosophical debate over cognition is to shun the terminology that generates the debate. For instance, one scientist, who has published numerous papers on associative learning in fruit flies and snails, wrote to me that "the distinction into cognitive and non-cognitive has no heuristic value... In my construction of the world, I see no use of the word 'cognitive' (yet?)" (Bjoern Brembs, personal e-mail communication, 22 December 2002).

The methodology finally proposed in the Introduction for evaluating a claim that a certain kind of behaviour is indicative of a mental state was to invoke mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with a degree of empirical accuracy as great as or greater than that of alternative modes of explanation.

Earlier, I rejected an a priori approach to mental states as philosophically limiting: such an investigation runs the risk of omitting important evidence that may fall outside the narrow bounds of the investigator's definition. Nevertheless, we have to start looking somewhere in our quest for mental states. In my quest for mental states in organisms, I critically examine and make extensive use of two approaches in particular - the computational approach of Steve Wolfram and the intentional stance developed by Daniel Dennett.


2.A.3 Conclusions reached - a note to the reader

In the course of my investigation, I shall list and number my conclusions for ease of reference. I shall formulate conclusions of several different kinds:

Table 2.0 - Numbering system adopted for conclusions in chapter two
Category of conclusions Numbering system adopted
Conclusions relating to Steve Wolfram's computational description of cognitive mental states. C.1, C.2, etc.
Conclusions relating to Daniel Dennett's intentional stance and its implications for cognitive mental states. I.1, I.2, etc.
Conclusions relating to biological criteria for the possession of cognitive mental states. B.1, B.2, etc.
Conclusions relating to sensory capacities required for the possession of cognitive mental states. S.1, S.2, etc.
Conclusions regarding the relevance of memory to cognitive mental states. M.1, M.2, etc.
Conclusions regarding the relevance of flexible behaviour to cognitive mental states. F.1, F.2, etc.
Conclusions regarding the relevance of learning to cognitive mental states. L.1, L.2, etc.
Conclusions regarding what kinds of actions enable us to identify cognitive mental states. A.1, A.2, etc.
Conclusions regarding the relevance of representations to the possession of cognitive mental states. R.1, R.2, etc.
Conclusions regarding the relevance of normativity criteria to the possession of cognitive mental states. N.1, N.2, etc.
Definitions of the necessary and sufficient conditions for the exercise of various kinds of intentional agency. Df. 1, Df. 2, and so on.


2.A.4 Wolfram's neo-animism: Are minds nothing more than computational devices?

Steve Wolfram (2002, p. 845) argues that although the idea of animism - which he defines as the view "that systems with complex behavior in nature must be driven by the same kind of essential spirit as humans" - "has been seen as naive and counter to progress in science", this idea is actually "crucial" to science.

Wolfram's espousal of what I would call a "neo-animist" position with regard to the occurrence of mind (or "intelligence", to use his preferred terminology) is a consequence of:

(i) a mathematical perspective on nature (that each and every entity in the natural world can be regarded as a problem-solving computational device, where the functions or mappings that take the entity from one state to another are the laws of nature, and the calculations or computations are the set of processes - or "pattern changes" - which occur inside the entity);

(ii) an empirical claim (that we live in a universe made up of systems that can only exist in a finite number of states);

(iii) a computational principle (the Principle of Computational Equivalence, which states that there is an upper limit to complex behaviour in a universe like ours - namely, that found in a universal Turing machine, which can be programmed to follow any finite-state rule);

(iv) a second empirical claim (that almost all systems found in the natural world are computationally equivalent to universal Turing machines in terms of the kind of calculations they can be used to perform - that is, they can be used to solve the same range of problems, given enough time and memory); and

(v) Wolfram's own philosophical view of what it means to be "intelligent": he proposes that the range of problems which an entity can be used to solve can be used as a measure of its intelligence, and rejects attempts to define an entity's intelligence in terms of its purposes or intentions, as these are typically too hard for outsiders to discern.

Whatever one thinks of Wolfram's reasoning (see Gray, 2003, for a mathematical critique; my own critical philosophical comments are contained in the Appendix), it is hard to disagree with the underlying idea that computation, which Wolfram defines broadly as behaviour that can be described by a rule, is a useful starting point for any discussion of mental states. We cannot discern any kind of meaning (let alone intelligence) in an entity's behaviour unless we can first recognise a pattern in it. This leads me to propose my first conclusion regarding computational criteria for cognitive mental states:

C.1 Our identification of computations in an entity, or rule-governed transformations that take it from one state to another, is a necessary condition for our being able to ascribe cognitive mental states to it.

The term "entity" is employed very loosely here, to cover individuals, their parts, aggregates or systems in general. The initial and final states can be regarded as the "input" and "output" of the computation.

The epistemological conclusion above imposes a condition on our being able to recognise intelligence: that we should never impute mental states to an entity whose behaviour is, from our standpoint, totally devoid of any underlying pattern. (There may well be entities whose behaviour is too complex for us to discern the rules underlying it. Wolfram's Principle of Computational Equivalence entails that our brains, being universal systems, should eventually be able to discover the rules, but "eventually" may be a lot longer than a human lifespan!)

Wolfram's insight that computations are ubiquitous in nature allows us to formulate a second conclusion regarding the range of entities performing computations:

C.2 All natural entities and natural processes can be described according to Wolfram's computational stance: that is, the set of natural entities which perform computations is universal.

Evaluation of Wolfram's arguments

In the Appendix I criticise Wolfram's arguments for excluding purpose from the definition of intelligence, and suggest that Wittgenstein's notion of a form of life (Philosophical Investigations I. 19, 23) offers a way of recognising the meaning of any intelligent utterances by non-human animals. I argue that Wolfram's definition of a system's intelligence in terms of the range of problems it can be used to solve leaves us unable to account for a basic distinction between messages and message-encoders (or decoders). We would only ascribe mental states (or minds) to the latter, but from Wolfram's perspective a system embodying a message might be computationally equivalent to its sender or recipient, and would thus qualify as "intelligent".

One way of distinguishing between minds and messages might be to look at their intentional properties. Some philosophers (e.g. Searle, 1999) distinguish the intrinsic intentionality of our mental states (roughly, what they are about) from the derived intentionality of our messages. Dennett (1997) rejects this distinction; nevertheless, as his discussion of intentionality is perspicuous and is frequently cited in the literature, I propose to begin by examining what he calls intentional systems.


2.A.5 Dennett's intentional stance: Is mind a property of intentional systems?


A home thermostat is a simple example of an intentional system. Photo courtesy of howstuffworks.

Dennett (1997, pp. 34 - 49) argues that we can regard all organisms - and, for that matter, many human artifacts - as what he calls intentional systems: entities whose behaviour can be predicted from an intentional stance, where the entities are treated as if they were agents who choose to behave in a certain way, because of their underlying beliefs about their environment, and their desires. As Dennett puts it, intentional systems exhibit the philosophical property of aboutness: for instance, beliefs and desires have to be about something. I may believe that the food in front of me is delicious: I have a belief about the food, and a desire relating to it (a desire to eat it). The food is the intentional object of my belief and desire - even if it turns out that the object I had presumed to exist, does not (e.g. if the "food" is really plastic that has been molded, painted and sprayed with volatile chemicals, in order to make it look and smell like delicious food).

Dennett suggests that we can usefully regard living things and their components from an intentional stance, because their behaviour is "produced by information-modulated, goal-seeking systems" (p. 34):

It is as if these cells and cell assemblies were tiny, simple-minded agents, specialized servants rationally furthering their particular obsessive causes by acting in the ways their perception of circumstances dictated. The world is teeming with such entities, ranging from the molecular to the continental in size and including not only "natural" objects, such as plants, animals and their parts (and the parts of their parts), but also many human artifacts. Thermostats, for instance, are a familiar example of such simple pseudoagents (1997, pp. 34 - 35).

Elsewhere, Dennett elaborates his reasons for regarding a thermostat as an intentional system:

...it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat's owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desire is unfulfilled. Of course you don't have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats ... you have to rise to this intentional level... [W]hat ... thermostats ... all have in common is a systemic property that is captured only at a level that invokes belief-talk and desire-talk (or their less colorful but equally intentional alternatives; semantic information-talk and goal-registration-talk, for instance) (1995a).

The chief advantage of the intentional stance, as Dennett sees it, is its predictive convenience. There are two other methods of predicting an entity's behaviour: what Dennett calls the physical stance (using scientific laws to predict the outcome - e.g. the trajectory of a bullet fired from a gun), and the design stance (assuming that the entity has been designed to function in a certain way, and that it is working properly - e.g. that a digital camera will take a picture when I press the button). The latter stance saves time and worry if the inner workings of the entity in question are too complex for behaviour to be rapidly predicted from a physical stance. Sometimes, however, even an entity's functions may be bafflingly complicated, and we may try to predict its behaviour by asking: what does it know (or at least, believe) and what does it want? The example Dennett employs is that of a chess-playing computer. I may not understand its program functions, but if I assume that it wants to win and knows where the pieces are on the board, how to move them and what the consequences of each possible move will be (up to a certain number of moves ahead), then I can make a good guess (perhaps a wrong one, given the limits of my memory and imagination) as to what it will do next in a game.
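The predictive economy of the intentional stance can be illustrated with a toy sketch in Python (my own example; Dennett offers no such code, and the names used here are purely illustrative). To predict the thermostat we ignore its wiring altogether and ascribe to it a "desire" (its setpoint) and a "belief" (whatever its sensor reports):

# A toy, illustrative model (not Dennett's) of predicting a thermostat's
# behaviour from the intentional stance: we ask only what it "believes"
# (its sensed temperature) and what it "desires" (its setpoint).

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint        # the "desire", set dictatorially by the owner

    def believes_room_too_cold(self, sensed_temp):
        return sensed_temp < self.setpoint   # the "belief", supplied by a sensor

def predict_action(thermostat, sensed_temp):
    """Intentional-stance prediction: the device acts so as to satisfy its
    'desire' whenever it 'believes' that desire is unfulfilled."""
    if thermostat.believes_room_too_cold(sensed_temp):
        return "switch heating on"
    return "switch heating off"

print(predict_action(Thermostat(setpoint=20.0), sensed_temp=17.5))  # switch heating on

The same device could of course be described mechanically or in molecular terms, as Dennett notes; the intentional description simply lets us predict what any thermostat will do without knowing those details.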

Regarding minds in general, the thesis of Dennett's book, Kinds of Minds, can be summarised as follows:

Dennett's third thesis has been hotly contested, and I will discuss it below.

I shall evaluate Dennett's intentional stance, by addressing three relevant issues. First, has Dennett mis-described intentionality? Second, is his intentional stance a global theory of mental states? Third, is it tied to any philosophically contentious theories - in particular, reductionism - or can it be used by philosophers of all persuasions?

Later, I shall argue that Dennett's intentional stance, while philosophically fruitful, does not adequately describe the necessary conditions for the occurrence of mental states, as it overlooks the crucial distinction between living and non-living systems: the latter, I contend, are ineligible for possessing mental states. Additionally, I propose that Dennett's intentional stance can be described in two ways, and that this suggests a rough program for distinguishing mental states from other states - and hence, distinguishing entities which possess minds from those that lack them.

2.A.5(a) Has Dennett mis-described intentionality?


An AIM-9 Sidewinder heat-seeking missile. According to David Beisecker, it has the wrong kind of intentionality for mental states. Image courtesy of Steen Skov and Turbo Squid.

Beisecker (1999) has challenged the generally accepted account of intentionality:

The intentionality thought to be so definitive of mental states is typically glossed in terms of aboutness or directedness toward objects. The term 'intentionality' derives from a Latin word meaning roughly "to aim" - as one might do with a bow...

But then again, things we're not prepared to credit with thought - for example, heat-seeking missiles and sunflowers - also exhibit directedness towards objects. The challenge then is to find a way to distinguish the special sort of directedness possessed by bona fide thinkers from the more primitive kinds exhibited by these simpler systems (1999, p. 282).

Beisecker offers his own suggestion: "the hallmark of intentional states is their normativity, or susceptibility to evaluation" (1999, p. 283). However, Beisecker is forced to admit that "there is a sense in which artifacts are susceptible to evaluation, and thus possess a certain sort of intentionality" (1999, p. 288): they can fail to fulfill the purpose for which they were designed. For Beisecker, this kind of "intentionality" is purely derivative and hence "second-class", but the point I wish to make here is that a parallel observation can be made using Dennett's version of the intentional stance: the "beliefs" we metaphorically ascribe to thermostats are derivative upon their design specifications. Thus Beisecker's account is vulnerable to the same kinds of criticisms he directs at the notion of "aboutness": it includes not only mental states, but other phenomena as well.

Thus intentionality is not definitive of mental states, according to either Dennett's account or Beisecker's: other things also possess it. However, at this stage of our investigation, I would regard it as prejudicial to even attempt an a priori definition of mental states, before we have looked at organisms and their capacities. Rather, we should cast our net wide and attempt to describe a class of phenomena which contains all mental states, even if it includes much else besides.

Since, as Beisecker himself acknowledges, intentionality is etymologically related to "aboutness" and has historically been defined in those terms, I propose to retain the notion of "aboutness" as a useful starting point for discussing intentionality, without endorsing Dennett's philosophy of mind as such. The traditional notion of intentionality is employed by Dennett's philosophical friends and foes alike.

I shall, however, re-visit Beisecker's normativity criterion at a later stage in this chapter, since Beisecker applies it to the vital question of whether animals possess genuine intentionality.

But before we can apply the traditional notion of intentionality to mental states, we have to ask: does it apply to all mental states, or are there some that lack the property of "aboutness"?

2.A.5(b) Is Dennett's intentional stance a global theory of mental states?

Dennett has performed a valuable service, by providing a perspective within which we can situate mental states, and telling us where to start looking for them: on his theory, we should start by looking for behaviour that can be described by the intentional stance.

Of course, if there are some mental states that cannot be described by the intentional stance, then Dennett's thesis is in trouble. One might argue that there are mental states, such as perceptions and drives, which are too primitive to be characterised in terms of the beliefs and desires which Dennett uses to characterise this stance. However, such a criticism misses the point. As Dennett's example of the thermostat shows, even a mechanical sensor can be described using the intentional stance: it switches on whenever it believes that the room is too hot or cold. In fact, Dennett (1995a) is famous for allowing that thermostats do indeed have "beliefs", because he construes "beliefs" in a "maximally permissive" sense as "information-structures" that are "sufficient to permit the sort of intelligent choice of behavior that is well-predicted from the intentional stance". Moreover, as Dennett argues, perceptual states (such as recognising a horse) exhibit aboutness, even if they are involuntary or automatic. A perception is always a perception of something. In other words, perceptions exhibit the property of aboutness or intentionality (1997, pp. 48 - 49). The same could be said for drives: they are towards something.

Emotions may sometimes lack the property of aboutness: one may feel depressed or elated for no particular reason. However, as de Sousa (2003) points out, these feelings cannot serve as paradigm cases, as the different kinds of emotions can only be distinguished by specifying their formal objects:

A formal object is a property implicitly ascribed by the emotion to its target, focus or propositional object, in virtue of which the emotion can be seen as intelligible. My fear of a dog, for example, construes a number of the dog's features (its salivating maw, its ferocious bark) as being frightening, and it is my perception of the dog as frightening that makes my emotion fear, rather than some other emotion. The formal object associated with a given emotion is essential to the definition of that particular emotion (de Sousa, 2003).

It is worth noting that even Dennett's severest critics, such as Searle (1999), do not dispute his contention that the intentional stance is applicable to all kinds of minds. Is it also applicable to systems which lack minds? Searle and Dennett differ here: Searle does not ascribe intentionality to these systems, because for him, intentionality is "the general term for all the various forms by which the mind can be directed at, or be about, or of, objects and states of affairs in the world" (1999, p. 85, italics mine), while for Dennett, intentionality refers to the simple property of being about something else, whether the entity exhibiting intentionality is a mind or not (1997, pp. 46-47). Even opioid receptors in the brain, to use one of Dennett's examples, are "about" something else: they have been "designed" to accept the brain's natural pain-killers, endorphins. Anything that can "embody information" possesses intentionality (1997, p. 48).

The difference here between the two positions appears to be mainly terminological. Searle concedes that mindless systems may exhibit what he calls "as-if intentionality": they behave as if they had genuine (i.e. mindful) intentionality, and can be metaphorically described as such (1999, p. 93). The real point at issue between Searle and Dennett (to be discussed in part (d) below) is whether the intentionality of our mental states is a basic, intrinsic feature of the world, or whether it can be reduced to something else.

In any case, Dennett's intentional stance certainly opens up a fruitful approach to the investigation of other minds - be they human, alien or animal ones - and it also seems to be a useful tool for describing the mind-like behaviour of "pseudo-agents".

Being an intentional system, then, is a necessary but not sufficient condition for having a mind. It is not a sufficient condition, because there are many things - such as thermostats and biological macromolecules - which are capable of being described by this stance, but are not agents. Dennett refers to such entities as "pseudoagents" (1997, p. 35). In our quest for mental states, we should start by looking for "effects produced by information-modulated, goal-seeking systems" (1997, p. 34), which may either be minds or "as-if" minds.

2.A.5(c) Is Dennett's intentional stance tied to reductionism?


A DNA molecule. John Searle objects to Dennett's claim that intentional agency in human beings is grounded in the pseudo-agency of the macromolecules in their bodies. Picture courtesy of Columbia University.

At the outset of my quest for mental states in animals and (possibly) other organisms, I committed myself to an open-ended investigation, which avoided making philosophical assumptions about the nature of "mind" or "mental states". If Dennett's intentional stance turned out to be wedded to a particular, contentious account of "the mind", then its legitimacy would be open to challenge from the outset.

Certainly, Dennett does make one highly contentious reductionist claim: he holds (1997, pp. 27, 30-31) that intentional agency in human beings is grounded in the pseudo-agency of the macromolecules in their bodies. This claim has been contested by Searle, who argues (1999, pp. 90-91) that it is vulnerable to the homunculus fallacy. In its crudest version, the homunculus fallacy attempts to account for the intentional "aboutness" of our mental states by postulating some "little man" or "spectator" in the brain who deems them to be about something. Although Dennett does not account for the intentional "aboutness" of our mental states in this way, he does attempt to solve the problem by taking it down to a lower biological level, where the problem of "aboutness" is said to disappear: the intentionality of our mental states is the outcome of the mini-agency of the macromolecules in our bodies, and the intelligent homunculus is replaced by a horde of "dumb homunculi", each with its own specialised mini-task that it strives to accomplish (Dennett, 1997, pp. 30-31). Searle (1999, pp. 90-91) argues that this move merely postpones the problem: what gives our macromolecular states the intentional property of "aboutness"? Nor does Searle think much of causal accounts of "aboutness", where the intentionality of our symbols is said to be due to their being caused by objects in the world. The fatal objection to causal accounts is that the same causal chains may generate non-intentional states as well (1999, p. 91).

I would like to add that while Dennett's use of the intentional stance to describe the behaviour of the macromolecules in our bodies is pedagogically useful, it overlooks an important feature of rationality: he pictures them as "specialized servants rationally furthering their obsessive causes" (1997, p. 35). The picture contains an inherent contradiction: obsession is a mark of irrational rather than rational behaviour. The obsessive "mini-goals" of the parts of an intentional system derive their significance from the goals which the system, considered as a whole, is "trying" to achieve (e.g. food or sex). The metaphor of rational agency, I would suggest, is properly applied to the organism as a whole, as the good of the parts subserves that of the whole. If we use the intentional stance in our quest for mindful behaviour, then, it is not sufficient to identify body parts in which this behaviour is manifested. It must also be shown that the entity behaves as a whole (i.e. as a body) whose parts are integrated in a fashion that can be described by the intentional stance.

The fundamental divide between Dennett and Searle on intentionality concerns whether there is such a thing as "intrinsic intentionality" (whereby our mental states have a basic property of "aboutness"), as distinct from "derived intentionality" (whereby "words, sentences, books, maps, pictures, computer programs", and other "representational artifacts" (Dennett, 1997, pp. 66, 69) are endowed with an agreed meaning by their creators, who intend them to be "about" something). For Dennett, the distinction is redundant because the brain is itself an artifact of natural selection, and the "aboutness" of our brain states (read: mental states) has already been determined by their "creator, Mother Nature", who "designed" them (1997, p. 70). This move by Dennett is something of a fudge: "Mother Nature" (to borrow Dennett's anthropomorphism) does not "design" or "intend" anything; it merely causes things to happen, and as Searle has pointed out, causation is insufficient to explain intentionality. Searle (1999, pp. 89-98), while agreeing with Dennett that intrinsic intentionality is a natural, biological phenomenon, insists that there is an irreducible distinction between constructs such as the sentences of a language, whose meaning depends on what other people (language users) think, and conscious mental states such as thirst, whose significance does not depend on what other people think. Mental states, and not human constructs, are the paradigm cases of intentionality, and it is just a brute fact about the natural world that these conscious states (which are realised as high-level brain processes) refer intrinsically. An animal's conscious, intentional desire to drink, to use one of Searle's examples, is a biologically primitive example of intrinsic intentionality, with a natural cause: increased neuronal firing in the animal's hypothalamus. "That is how nature works" (1999, p. 95). Searle thus eschews both mysterian (dualist) and eliminative (reductionist) accounts of intentionality.

Despite the fierce controversy that rages over the roots of intentionality and the reducibility of mental states, it is admitted on all sides of the debate that a wide variety of entities can be treated as if they were agents in order to predict their behaviour. This, to my mind, is what makes Dennett's intentional stance a fruitful starting point in our quest for bearers of mental states. The issue of whether mental states can be reduced to mindless, lower-level processes is independent of the question of whether the intentional stance can be used to search for mental states.

Conclusions reached

If the foregoing arguments are correct, then we may conclude that behaving according to the intentional stance is a necessary condition for possessing mental states that are identifiable by us:

I.1 Our ability to describe an entity's behaviour according to Dennett's intentional stance is a necessary condition for our being able to ascribe cognitive mental states to it.

The intentional stance may well describe a considerably smaller class of entities than Wolfram's "computational stance", as I shall call it. Computations, broadly construed, are ubiquitous in nature, but the stipulation of a rule that describes an entity's information processing behaviour need not imply that the behaviour has a goal as such. It simply means that the entity can transform some initial states (inputs) into final states (outputs). Our final conclusion on Wolfram's computational stance is a negative one:

C.3 Our ability to describe an entity's behaviour in terms of rules which transform inputs into outputs (as per Wolfram's computational stance) is not a sufficient warrant for our being able to ascribe cognitive mental states to that entity.

On the other hand, Dennett's claim that the behaviour of all organisms can be described according to the intentional stance appears uncontroversial, in the light of our discussion of intrinsic finality in the previous chapter:

I.2 The set of entities which can be described by Dennett's intentional stance is not universal in scope, but includes all organisms (and their parts).


2.A.6 Why only living things can possess minds. Implications for artificial intelligence.


Image of the AIBO robot. Courtesy of Sanoma Magazines, Finland Oy, MikroBitti, April 2001. "AIBO" is a registered trademark of Sony.

Before we embark on a quest for minds in living organisms, we need to examine the issue of artificial intelligence. I contend that while Dennett's intentional stance is a fruitful starting point in our search for minds, it overlooks one very important condition which an entity must satisfy before it can be said to possess mental states: the entity in question must be alive.

While Dennett has narrowed the search for embodied minds, his use of the intentional stance to describe the behaviour of some non-living artifacts blurs the philosophically important distinction (argued for in the previous chapter) between living and non-living things. On Dennett's account, there is no reason in principle why non-living artifacts could not exhibit genuine agency, as opposed to the pseudo-agency of a thermostat. I would argue that Dennett has overlooked the notion of intrinsic finality, and that an entity lacking this kind of finality cannot be said to embody mental states, let alone agency. It has been argued in the previous chapter that there are profound differences between a living and a non-living system: only a living system has intrinsic relations, dedicated functionality and a nested hierarchy of parts, which give it an intrinsic end and make it a true individual - something we can call a body, instead of an assemblage.

I contend that the attribution of a mind to a system that lacks intrinsic finality makes no sense. If we accept Dennett's notion of the intentional stance, then mental states can be appropriately regarded as manifestations of (genuine or pseudo-)agency, insofar as they exhibit the property of aboutness or intentionality (Dennett, 1997, pp. 48 - 49): they are directed at something. Now, agents are not free-floating entities, but are located in, and individuated with reference to, bodies. The fact that we can tie agency to a body is what enables us to ascribe different actions to the same agent and to distinguish the pursuits of one agent from those of another. In chapter 1, it was argued that non-living systems are not bodies, but aggregates of parts which lack intrinsic unity. It is meaningless to describe the behaviour of such systems as the pursuits of an individual agent - although one might still imagine that a basic component of a non-living system, such as a molecule, could possess enough internal unity to manifest agency. (Such a molecule would at least possess internal relations, as described in chapter 1. Regarding the possibility of a living molecule, we have already concluded that a virus, which is little more than a DNA or RNA molecule wrapped in a protein coat, qualifies as being alive.)

There is, however, a deeper reason for scepticism regarding the notion of non-living agents. Before we can describe an entity as an agent with intentions of its own, it is always proper to ask: what are the entity's ends or goals? In the absence of identifiable ends, one might as well suppose that a cup of coffee is an agent. (Of course, the process by which we identify an agent's ends or goals may not be an infallible one - spies, for instance, are very good at concealing their ends from investigators.) And if the entity had a maker or master, the entity's ends would have to be (at least potentially) separable from those of its maker or master, before it could be called an agent. Without ends of its own, the entity would be nothing more than a tool. To qualify as an agent, an entity has to have some capacity for "self"-ish behaviour - i.e. behaviour that serves its own internal ends.

Why a coffee cup could never be an agent

The last point is crucial: even if we could interrogate an exotic non-living agent about its goals, how would we know that its answers were indeed its own? A humorous hypothetical will illustrate my point. Suppose that you stumbled across a talking coffee cup, and (once you had recovered from the shock) asked it about its goals. Suppose that the coffee cup's stated goal turned out to be a very altruistic one - peace in the Middle East. The sceptical question I wish to pose is: how could you know that you were talking to it, and not to some agent controlling it - via a microphone and loudspeaker cleverly embedded in the cup, for instance? Unless the cup could be shown to possess at least some intrinsic or "selfish" ends, and could benefit from satisfying these ends, there would be no reason to regard it as a bona fide agent. And in order to identify those ends, one would have to look for formal features (such as a master program directing the interactions between the parts, a nested hierarchy of organisation and dedicated functionality), which enable us to identify an individual as an organism possessing a good of its own. Additionally, one would have to identify the organism's basic needs or the essential conditions for its flourishing. In short: an agent has to be the sort of thing that can be said to benefit from what it does - in other words, possess a telos - even if it also has unselfish ends (like peace in the Middle East) that have nothing to do with its telos.

Man-made robots (such as AIBO, pictured above) and supercomputers are therefore doomed to remain mindless until and unless they acquire the properties of living things, identified in chapter 1.

The distinction between living and non-living systems is therefore presupposed by the distinction Dennett makes between intentional agents and pseudo-agents. Within the "family tree" of intentional systems, the most fundamental division is not between "agent" and "pseudo-agent", but between "alive" and "not alive". This is an important point to grasp, as it may seem that some non-living systems (e.g. chess-playing computers) are "cleverer" than many living systems (e.g. trees) and hence more like genuine agents. The point, however, is that trees are at least bona fide individuals with their own "selfish" ends like nutrition, whereas present-day human-built computers are assemblages without intrinsic ends, which can never exhibit agency, however well they may be programmed to mimic it.

If I am right, then we should restrict the search for mental states to organisms. Being alive is at least a necessary condition for having a mind. We can thus formulate a negative conclusion about Dennett's intentional stance, as well as a biological criterion for intelligence:

I.3 Our ability to describe an entity in terms of Dennett's intentional stance is not a sufficient condition for our being able to ascribe cognitive mental states to that entity.

B.1 An entity must be alive in order to qualify as having cognitive mental states.

This conclusion, unlike Conclusion C.1, is couched in absolute terms, rather than in terms of the limits of our knowledge. The point is that we can, most of the time, be certain that something is or is not alive, whereas the identification of all of an entity's computations is far less straightforward. Things that look simple may turn out to be complex.

However, stipulating "being alive" as a necessary condition for having mental states is methodologically vague. The following guidelines (based on chapter 1) serve to identify living things:

B.2 A necessary condition for our being able to ascribe cognitive mental states to an entity is that we can identify the following features:

(a) built-in biological needs, essential to its flourishing;

(b) a master program that regulates the internal structure of an organism and the internal interactions between its components;

(c) internal relations between the parts (i.e. new physical properties which appear when they are assembled together);

(d) a nested hierarchy of organisation of the parts;

(e) dedicated functionality, where the parts' repertoire of functionality is dedicated to supporting that of the unit they comprise;

(f) stability - the parts are able to work together for some time to maintain the entity in existence as a whole.

These conditions enable us to impute both a formal cause and a final cause or telos to the entity, and identify its "selfish" or intrinsic ends.

The following corollary of conclusion B.2 highlights the essential condition for the attribution of mental states, which was absent in our case of the talking coffee cup:

B.3 The presence in an individual of biologically "selfish" behaviour, which is directed at satisfying its own built-in biological needs, is an essential condition for the meaningful ascription of mental states to it.

If being alive is a necessary condition for having a mind (conclusion B.1), then the argument that, because a non-living intentional system (such as a thermostat) is a mere "pseudoagent", an organism with similar abilities need be nothing more, is undercut at once. The mere fact that an organism's actions are properly explained with reference to its intrinsic ends, or telos (which a thermostat lacks), is reason enough to treat the actions of the organism (but not the thermostat) as at least potential candidates for mental acts.

If my line of reasoning is correct, then any information-modulated, goal-seeking behaviour of an organism which is directed at the satisfaction of its biological needs is at least a prima facie candidate for being a manifestation of mental states. However, there may turn out to be valid philosophical reasons for concluding that only a subset of this behaviour warrants a mentalistic description. (These reasons will be discussed later in this chapter.)

Another corollary of conclusion B.2 is that mental states cannot be meaningfully imputed to a lineage of organisms, but only to individual organisms. Conclusion B.2 stipulates that we must be able to identify internal relations, a master program regulating the interactions between the parts, a nested hierarchy of organisation, and dedicated functionality, before ascribing cognitive mental states to an entity. An evolutionary lineage, unlike an individual organism, lacks all of these features.

B.4 An entity must be an individual biological organism in order to qualify as having cognitive mental states. An evolutionary lineage of organisms cannot be meaningfully described as having cognitive mental states.

We can now address Wolfram's sceptical question (2002, p. 827) of whether the wind could be said to embody an alien intelligence. Because the wind does not possess a nested hierarchy of organisation and lacks dedicated functionality, it cannot meaningfully be said to have intrinsic ends and qualify as a living individual - i.e. a body. Without a body, it cannot be said to possess mental states (Conclusion B.1).


2.A.7 Different kinds of intentional stance? Narrowing the search for mental states in organisms

I have argued that Dennett's intentional stance is a fruitful starting point in our quest for bearers of mental states. However, not all intentional systems have mental states. It has already been argued that non-living systems cannot meaningfully be credited with mental states, and there may be some organisms which also lack these states. It was suggested above that we should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with a degree of empirical accuracy as great as or greater than that of other modes of explanation. If we can explain the behaviour of an intentional system just as well without recourse to talk of mental states such as "beliefs" and "desires", then the ascription of mental states is scientifically unhelpful.

It is my contention that our intentional discourse comes in different "flavours", some richer (i.e. more mentalistic) than others, and that Dennett's intentional stance can be divorced from the use of terms such as "beliefs" and "desires". It is important, when describing the behaviour of an organism, to choose the right "flavour" of discourse - that is, language that is just rich enough to do justice to the behaviour, and allow scientists to explain it as fully as possible.

Two intentional stances?

Dennett's use of terms such as "information" (1997, p. 34) and "goals or needs" (1997, pp. 34, 46) to describe the workings of thermostats (1997, p. 35) shows that intentional systems do not always have to be described using the mentalistic terminology of "beliefs", "desires" and "intentions", in order to successfully predict their behaviour. An alternative "language game" is available. There are thus at least two kinds of intentional stance that we can adopt: we can describe an entity as having information, or ascribe beliefs to it; and we can describe it as having goals, or ascribe desires and intentions to it.

What is the difference between these two intentional stances? According to Dennett, not much: talk of beliefs and desires can be replaced by "less colorful but equally intentional" talk of semantic information and goal-registration (1995a). Pace Dennett, I would maintain that there are some important differences between the "information-goal" description of the intentional stance and the "belief-desire" description.

A goal-centred versus an agent-centred intentional stance

One difference between the two stances is that the former focuses on the goals of the action being described (i.e. what is being sought), while the latter focuses on the agent - in particular, what the agent is trying to do (its intentions). The distinction is important: often, an agent's goal (e.g. food) can be viewed as extrinsic to it, and specified without referring to its mental states. All the agent needs to attain such a goal is relevant information. A goal-centred intentional stance (which explains an entity's behaviour in terms of its goals and the information it has about them) adequately describes this kind of behaviour. Other goals (e.g. improving one's character, becoming more popular, or avoiding past mistakes) cannot be specified without reference to the agent's (or other agents') intentions. An agent-centred intentional stance (which regards the entity as an agent who decides what it will do, on the basis of its beliefs and desires) is required to characterise this kind of behaviour.
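The contrast between the two stances can be made explicit with a schematic sketch in Python (my own illustration; the data structures and field names are hypothetical, not drawn from Dennett): the same episode of foraging is recorded once in goal-centred terms (a goal plus the information available about it) and once in agent-centred terms (beliefs, desires and the intention formed on their basis). Only the second description mentions states of the agent itself.

# A purely illustrative contrast between the two "flavours" of intentional
# description. The classes and field names are hypothetical, not Dennett's.

from dataclasses import dataclass

@dataclass
class GoalCentredDescription:
    goal: str            # what is being sought, specified without mental terms
    information: dict    # the information the system has about that goal

@dataclass
class AgentCentredDescription:
    beliefs: dict        # how the agent represents its situation
    desires: list        # what the agent wants
    intention: str       # what the agent is trying to do, given the above

# One and the same episode of foraging, described two ways:
goal_centred = GoalCentredDescription(
    goal="food",
    information={"food_location": "under the third rock"},
)

agent_centred = AgentCentredDescription(
    beliefs={"food_is_under": "the third rock"},
    desires=["eat"],
    intention="turn over the third rock",
)

print(goal_centred)
print(agent_centred)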

Narrowing the search for mental states: the quest for the right kind of intentional stance

It was suggested above that we should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with a degree of empirical accuracy as great as or greater than that of other modes of explanation. Using Dennett's intentional stance, we can now clarify the task at hand in our search for entities with mental states. Having identified "mind-like" behaviour - i.e. behaviour which can be described using the goal-centred intentional stance - our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using an agent-centred intentional stance? The goal-centred stance is thus our "default" position. A switch to a mentalistic account (i.e. an agent-centred stance, which explicitly refers to beliefs and desires) is justified if and only if we conclude that it gives scientists a richer understanding of, and enables them to make better predictions about, the organism's behaviour.

Dennett approvingly cites the example of a logger who told him: "Pines like to keep their feet wet" (1997, p. 45). Describing the behaviour of pines from a mentalistic perspective is wholly appropriate in the domain of poetry. However, I maintain that the use of such mentalistic language by scientists is justified only if it furthers their understanding of how an entity functions, in a way that mind-neutral language could not. The sentence "Pines thrive on moisture", by contrast, implicitly acknowledges that pines have a good of their own, while avoiding unnecessary mentalism.

Conclusion B.3 above highlighted behaviour that satisfies an individual's biological needs as an essential condition of our being able to attribute mental states to it. Dennett's agent-centred intentional stance suggests a way of re-phrasing this conclusion which allows us to narrow our search for individuals with mental states:

I.4 Before we can attribute beliefs and desires to an organism, it must be capable of exhibiting behaviour which manifests its desires for its own built-in biological ends, as well as its beliefs about those ends.


2.A.8 Biological processes that are best described using a goal-centred intentional stance

Case study: viral replication

Image of influenza virus. Copyright Linda M. Stannard, Department of Medical Microbiology, University of Cape Town, 1995.

A mind-neutral intentional stance can be applied to the behaviour of viruses when they invade cells:

Viruses ... have evolved defenses to help them evade the immune system. Viruses that cause infection in humans hold a "key" that allows them to unlock normal molecules (called viral receptors) on a human cell surface and slip inside.

Once in, viruses commandeer the cell's nucleic acid and protein-making machinery, so that more copies of the virus can be made (Emerson, 1998).

The ability of viruses to evade cell defences can be described using Dennett's intentional stance: they possess information (a "key") that enables them to enter and control their host, thereby achieving their goal (replication). But it has been argued above (see Conclusion I.3) that our ability to describe an entity using the intentional stance is, by itself, not a sufficient reason for imputing cognitive mental states to it. A mind-neutral goal-centred intentional stance suffices here to explain the behaviour of a virus in terms of its information and goals. An agent-centred mentalistic stance should not be adopted unless it enables us to make better predictions about viruses' behaviour.

The foregoing example allows us to strengthen Conclusion I.3 and formulate a further negative conclusion regarding Dennett's intentional stance:

I.5 Our ability to identify behaviour in an organism that can be described using the intentional stance is not a sufficient warrant for ascribing mental states to it.

The possibility of applying a mind-neutral intentional stance to the characteristic behaviour of organisms also has biological implications:

B.5 Being an organism is not a sufficient condition for having mental states.

We are almost ready to embark on our quest for non-human creatures with mental states. But before we proceed on our search, we need to address arguments which purport to show that any such quest is doomed to failure.


2.A.9 Philosophical arguments against the possibility of belief in non-human animals


Is this lion capable of believing that the ox it is about to eat is near?
Photo courtesy of Oxford University Development Programme, Wildlife Conservation Research Unit.

Some philosophers have attacked the very idea of attributing beliefs to animals as absurd. Sorabji (1993, pp. 12-14, 35-38) convincingly demonstrates that Aristotle himself (De Anima 3.3, 428a18-24) steadfastly refused to attribute belief (doxa) to animals, despite acknowledging their possession of sensory perception (aisthesis). However, Sorabji also points out that Aristotle's usages of both terms differed in important ways from the English terms "sensory perception" and "belief".

First, although Aristotle denied beliefs to animals, he allowed that they could have perceptions with a propositional content - e.g. the lion in Aristotle's Nicomachean Ethics (3.10) perceives that the ox it is about to eat is near - whereas in modern usage perceptions are typically regarded as simply having an object.

Second, for Aristotle, there can be no meaningful ascription of belief without the possibility of conviction and self-persuasion (De Anima 3.3, 428a18-24), whereas the same cannot be said for our English word belief: "The nervous examinee who believes that 1066 is the date of the Battle of Hastings may, through nervousness, not be convinced, and need not have been persuaded" (Sorabji, 1993, p. 37). The ascription of beliefs to non-human animals reflects contemporary linguistic norms that are far removed from those of Aristotle, who, were he alive today, might use a different term (such as personal convictions) to denote the states with a propositional content that only humans are capable of.

Some contemporary philosophers, on the other hand, have a more deep-seated objection to animal belief than Aristotle's: the ascription of any mental state with a propositional content (such as a belief) to a non-human animal is absurd, either because (i) the object of a belief is always that some sentence S is true, and lacking language, an animal cannot believe that any sentence is true (Frey, 1980), or because (ii) nothing in an animal's behaviour allows us to specify the content of its belief and determine the boundaries of its concepts (Stich, 1979, p. 26, refers to this as the "dilemma of animal belief"), or because (iii) none of our human concepts can adequately express the content of an animal's belief, given its lack of appropriate linguistic behaviour that would confirm that our ascription was correct (Davidson, 1975).

An example from Dennett (1997, p. 56) illustrates this point. What does a dog think, just as it is about to eat? Does it think the thought that "My dish is full of beef", or the thought that "My plate is full of calves' liver", or even the thought that "The red, tasty stuff in the thing that I usually eat from is not the usual dry stuff they feed me"?

The common assumption underlying the above objections is that the content of a thought must be expressible by a that-clause, in some human language. Carruthers (2004) rejects this assumption on the grounds that it amounts to a co-thinking constraint on genuine thoughthood: "In order for another creature (whether human or animal) to be thinking a thought, it would have to be the case that someone else should also be capable of entertaining that very thought, in such a way that it can be formulated into a that-clause." This is a dubious proposition at best: as Carruthers points out, some of Einstein's more obscure thoughts may have been thinkable only by him.

A more reasonable position, urges Carruthers, is that an individual's thoughts can be characterised just as well from the outside (by an indirect description) as from the inside (by a that-clause which allows me to think what the individual is thinking):

In the case of an ape dipping for termites, for example, most of us would ... say something like this: I don't know how much the ape knows about termites, nor how exactly she conceptualizes them, but I do know that she believes of the termites in that mound that they are there, and I know she wants to eat them (Carruthers, 2004).

Dennett makes a similar point:

The idea that a dog's "thought" might be inexpressible (in human language) for the simple reason that expression in a human language cuts too fine is often ignored, along with its corollary: the idea that we may nevertheless exhaustively describe what we can't express, leaving no mysterious residue at all (1997, p. 56).

The point I wish to make here is not that animals are capable of having beliefs, but that the arguments that they are in principle incapable of doing so are open to reasonable doubt, and that the attempt to identify forms of animal behaviour that warrant description in terms of an agent-centred intentional stance is not a fool's errand.