What does it take to have a minimal mind?

Part C: Four models of a minimal mind

2.C.1. A model of operant agency in insects



The fruit fly Drosophila melanogaster. Picture courtesy of Karolinska Institute, Sweden.

In this section, I present what I believe to be a list of sufficient conditions for the occurrence of intentional agency, in the context of operant conditioning. My proposals are based on Prescott (2001), Abramson (1994, 2003), Dretske (1999), Wolf and Heisenberg (1991), Heisenberg, Wolf and Brembs (2001), Brembs (1996, 2003), Cotterill (1997, 2001, 2002), Grau (2002), Beisecker (1999) and Carruthers (2004). Of particular relevance are the experiments described by Wolf and Heisenberg (1991), Heisenberg, Wolf and Brembs (2001), and Brembs (1996, 2003), with the fruit fly Drosophila melanogaster at the flight simulator.

Flight simulator set-up. Taken from "An Analysis of Associative Learning in Drosophila at the Flight Simulator", Ph.D. thesis by Bjoern Brembs. In the experiments, a fruit fly is tethered to a computer and flies stationarily in a cylindrical arena homogeneously illuminated from behind. Only one degree of freedom of its movements is effective: its yaw torque, or tendency to perform left or right turns, which is measured continuously and fed into the computer. The fly is then subjected to simple operant conditioning, classical conditioning, or a combination of the two (either flight-simulator mode or switch-mode). The computer controls pattern position (via the motor control unit K), shutter closure and color of illumination according to the conditioning rules.

Readers are strongly advised to familiarise themselves with the experimental set-up, which is described in section 1.1 of the Appendix to part C of chapter 2, before going any further.

Innate preferences

The experimental set-up depicted above for monitoring the operant behaviour of the fruit fly (Drosophila melanogaster) is constructed on the assumption that fruit flies have an innate aversion to heat, and will therefore try to avoid an infra-red heat beam. The flies in the experiment face a formidable challenge: they have to "figure out" what to do in order to shut off a heat beam which can fry them in 30 seconds. The flies therefore satisfy a crucial condition for our being able to ascribe mental states to animals: they display selfish behaviour, which is directed at satisfying their own built-in biological needs (see conclusion B.3 in part A).

Innate motor programs, exploratory behaviour and action selection

In the experiment, the tethered fruit fly is placed in a cylindrical arena which is capable of rotating in such a way as to simulate flight, even though the fly is stationary. The fly has four basic motor patterns that it can activate - in other words, four degrees of freedom. It can adjust its yaw torque (tendency to perform left or right turns), lift/thrust, abdominal position or leg posture (Heisenberg, Wolf and Brembs, 2001, p. 2).

The fly selects an appropriate motor pattern by a trial-and-error process of exploratory behaviour. Eventually, it manages to stabilise the rotating arena and prevent itself from being fried by the heat beam:

As the fly initially has no clue as to which behavior the experimenter chooses for control of the arena movements, the animal has no choice but to activate its repertoire of motor outputs and to compare this sequence of activations to the dynamics of arena rotation until it finds a correlation (Heisenberg, Wolf and Brembs, 2001, p. 2).

Prescott (2001, p. 1) defines action selection as the problem of "resolving conflicts between competing behavioural alternatives". The behavioural alternative (or motor pattern) selected by the fly is the one which enables it to avoid the heat. The fly engages in action selection when undergoing operant conditioning, and also when it is in "flight-simulator mode" and "switch-mode" (see section 1.1 of the Appendix to part C of chapter 2 for a definition of these terms).

Fine-tuning

A tethered fly has four basic motor programs that it can activate. Each motor program can be implemented at different strengths or values. A fly's current yaw torque always has a particular value; its thrust is always at a certain intensity, and so on. In other words, each of the fly's four motor patterns can be fine-tuned.

In the fruit-fly experiments described above, flies were subjected to four kinds of conditioning, the simplest of which is referred to by Brembs (2000) as pure operant conditioning and by Heisenberg, Wolf and Brembs (2001) as yaw-torque learning. But if we follow the naming convention I proposed in part B, the flies' behaviour might be better described as instrumental conditioning, as the essential ingredient of fine-tuning appears to be absent. As Heisenberg (personal email, 6 October 2003) points out, all that the flies had to learn in this case was: "Don't turn right." The range of permitted behaviour (flying anywhere in the left domain) is too broad for us to describe this as fine-tuning. Only if we could show that Drosophila was able to fine-tune one of its motor patterns (e.g. its thrust) while undergoing yaw torque learning could we then justifiably conclude that it was a case of true operant conditioning.

In flight-simulator mode, the flies faced a more interesting challenge: they had to stabilise a rotating arena by modulating their yaw torque (tendency to turn left or right), and they also had to stay within a safe zone to avoid the heat. In other experiments (Brembs, 2003), flies were able to adjust their thrust to an arbitrary level that stopped their arena from rotating. I would argue that the ability of the flies to narrow their yaw torque range or their thrust to a specified range, in order to avoid heat, fulfils the requirements for fine-tuning as defined in part B: namely, stabilising a basic motor pattern at a particular value or confining it within a narrow range of values, in order to achieve a goal that the individual had learned to associate with the action. We can conclude that Drosophila is capable of true operant behaviour.

Other requirements for conditioning: a current goal, sensory inputs and associations

As well as having innate goals, the fly also has a current goal: to avoid being incinerated by the heat beam.

Sensory input can also play a key role in operant conditioning: it informs the animal whether it has attained its goal, and if not, whether it is getting closer to achieving it. A fly undergoing operant conditioning in sw-mode or fs-mode needs to continually monitor its sensory input (the background pattern on the cylindrical arena), so as to minimise its deviation from its goal (Wolf and Heisenberg, 1991; Brembs, 1996, p. 3).

By contrast, a fly undergoing yaw torque learning has no sensory inputs that tell it if it is getting closer to its goal: it is flying blind, as it were. The only sensory input it has is a "punishment" (the heat beam is turned on) if it turns right.

Finally, an animal undergoing conditioning needs to be able to form associations. In this case, the fly needs to be able to either associate motor commands directly with their consequences (yaw torque learning) or associate them indirectly, by forming direct associations between motor commands and sensory inputs (changing patterns on the fly's background arena), and between these sensory inputs and the consequences of motor commands.

Internal representations and minimal maps

We still do not have a list of sufficient conditions for intentional agency. Even if a fly can learn how to fine-tune its motor patterns in order to attain a goal, why should its behaviour warrant an agent-centred description rather than a mind-neutral, goal-centred one?

What I am proposing here is that the representational notion of a minimal map is what warrants a mentalistic account of operant conditioning.

The map metaphor for belief is by no means new. Its clearest articulation can be found in Ramsey (1990):

A belief of the primary sort is a map of neighbouring space by which we steer. It remains such a map however much we complicate it or fill in details (1990, p. 146).

Before I explain what I mean by a minimal map, I would like to make it clear that I am not claiming that all or even most beliefs are map-like representations. What I am proposing here is that Ramsey's account provides us with a useful way of understanding the beliefs which underlie operant behaviour in animals, as well as three other kinds of agency (spatial learning of visual landmarks, refined tool making and social learning, which I discuss below).

What do I mean by a minimal map? Stripped to its bare bones, a map must be able to do three things: it must be capable of showing you where you are now, where you want to go, and how to get there. The word "where" need not be taken literally as referring to a place, but it has to refer to a specific state - for instance, a specific color, temperature, size, angle, speed or intensity - or at least a relatively narrow range of values.

Definition - "minimal map"
A minimal map is a representation which is capable of showing:

(i) an individual's current state,
(ii) the individual's goal and
(iii) a suitable means for getting to the goal.

A minimal map need not be spatial, but it must represent specific states.

A minimal map can also be described as an action schema. The term "action schema" is used rather loosely in the literature, but Perner (2003, p. 223, italics mine) offers a good working definition: "[action] schemata (motor representations) not only represent the external conditions but the goal of the action and the bodily movements to achieve that goal."

A minimal map is more than a mere association. An association, by itself, does not qualify as a minimal map, because it does not include information about the individual's current state.

On the other hand, a minimal map is not as sophisticated as a cognitive map, and I am certainly not proposing that Drosophila has a "cognitive map" representing its surroundings. In a cognitive map, "the geometrical relationships between defined points in space are preserved" (Giurfa and Capaldi, 1999, p. 237). As we shall see, the existence of such maps even in honeybees remains controversial. What I am suggesting is something more modest.

First, Drosophila can form internal representations of its own bodily movements, for each of its four degrees of freedom, within its brain and nervous system.

Second, it can either (a) directly associate these bodily movements with good or bad consequences, or (b) associate its bodily movements with sensory stimuli (used for steering), which are in turn associated with good or bad consequences (making the association of movements with consequences indirect). In case (a), Drosophila uses an internal motor map; in case (b), it uses an internal sensorimotor map. In neither case need we suppose that it has a spatial grid map.

A minimal map, or action schema, is what allows the fly to fine-tune the motor program it has selected. In other words, the existence of a minimal map (i.e. a map-like representation of an animal's current state, goal and pathway to its goal) is what differentiates operant from merely instrumental conditioning.

For an internal motor map, the current state is simply the present value of the motor plan the fly has selected (e.g. the fly's present yaw torque), the goal is the value of the motor plan that enables it to escape the heat (e.g. the safe range of yaw torque values), while the means for getting there is the appropriate movement for bringing the current state closer to the goal.

For an internal sensorimotor map, the current state is the present value of its motor plan, coupled with the present value of the sensory stimulus (color or pattern) that the fly is using to navigate; the goal is the color or pattern that is associated with "no-heat" (e.g. an inverted T); and the means for getting there is the manner in which it has to fly to keep the "no-heat" color or pattern in front of it.
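To make the difference between the two kinds of map concrete, the following sketch (written in Python, purely as an illustration) shows how each kind satisfies the three requirements of a minimal map. The field names, value ranges and steering rules are my own assumptions, not features drawn from the experimental literature.

    from dataclasses import dataclass

    # Illustrative sketch only: the field names, value ranges and steering
    # rules are assumptions, not data from the Drosophila experiments.

    @dataclass
    class MotorMap:
        """Direct map: motor values are associated with consequences."""
        current_torque: float      # (i) current state (the efference copy)
        safe_range: tuple          # (ii) goal: torque range linked to "no heat"

        def means(self):
            """(iii) Means: the adjustment that moves the current state
            towards the goal range."""
            lo, hi = self.safe_range
            if self.current_torque < lo:
                return lo - self.current_torque
            if self.current_torque > hi:
                return hi - self.current_torque
            return 0.0             # already within the safe range

    @dataclass
    class SensorimotorMap:
        """Indirect map: motor values -> sensory stimuli -> consequences."""
        current_torque: float      # (i) current state, motor component
        pattern_angle: float       # (i) current state, sensory component
        goal_pattern: str          # (ii) goal: e.g. the inverted T ("no heat")

        def means(self):
            """(iii) Means: steer so the goal pattern stays in front."""
            return -self.pattern_angle   # torque that centres the goal pattern

On this sketch, the motor map never consults the visual display, whereas the sensorimotor map steers by it; the difference between yaw torque learning and fs-mode learning, as described above, falls out naturally.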

I maintain that Drosophila employs an internal sensorimotor map when it is undergoing fs-mode learning. I suggest that Drosophila might use an internal motor map when it is undergoing pure operant conditioning (yaw torque learning). (I am more tentative about the second proposal, because as we have seen, in the case of yaw torque learning, Drosophila may not be engaging in fine-tuning at all, and hence may not need a map to steer by.) Drosophila may make use of both kinds of maps while flying in sw-mode, as it undergoes parallel operant conditioning.

An internal motor map, if it existed, would be the simplest kind of minimal map, but if (as I suggest) what Brembs (2000) calls "pure operant conditioning" (yaw torque learning) turns out to be instrumental learning, then we can explain it without positing a map at all: the fly may be simply forming an association between a kind of movement (turning right) and heat (Heisenberg, personal email, 6 October 2003).

Wolf and Heisenberg's model of agency compared and contrasted with mine

My description of Drosophila's internal map draws heavily upon the conceptual framework for operant conditioning developed by Wolf and Heisenberg (1991, pp. 699-705). In their model, the fly compares its motor output with its sensory stimuli, which indicate how far it is from its goal. When a temporal coincidence is found, a motor program is selected to modify the sensory input so that the animal can move towards its goal. If the animal consistently controls a sensory stimulus by selecting the same motor program, then we can speak of operant conditioning.

The main differences between this model and my own proposal are that:

(i) I envisage a two-stage process, whereby Drosophila first selects a motor program (action selection), and subsequently refines it through a fine-tuning process, thereby exercising control over its bodily movements;

(ii) I hypothesise that Drosophila uses a minimal map of some sort (probably an internal sensorimotor map) to accomplish this; and

(iii) because Drosophila does not use this map when undergoing instrumental conditioning, I predict a clear-cut distinction (which should be detectable on a neurological level) between operant conditioning and merely instrumental conditioning.

In the fine-tuning process I describe, there is a continual interplay between Drosophila's "feedback" and "feedforward" mechanisms. Drosophila at the torque meter can adjust its yaw torque, lift/thrust, abdominal position or leg posture. I propose that the fly has an internal motor map or sensorimotor map corresponding to each of its four degrees of freedom, and that once it has selected a motor program, it can use the relevant map to steer itself away from the heat.

How might the fly form an internal representation of the present value of its motor plan? Wolf and Heisenberg (1991, p. 699) suggest that the fly maintains an "efference copy" of its motor program, in which "the nervous system informs itself about its own activity and about its own motor production" (Legrand, 2001). The concept of an efference copy was first mooted in 1950, when it was suggested that "motor commands must leave an image of themselves (efference copy) somewhere in the central nervous system" (Merfeld, 2001, p. 189). However, efference copy cannot simply be compared with the sensory afference elicited by the animal's movement, since one is a motor command and the other is a sensory cue (Merfeld, 2001, p. 189). Merfeld's model resembles one developed by Gray (1995):

Analogous to Gray's description of his model, ... [my] model (1) takes in sensory information; (2) interprets this information based on motor actions; (3) makes use of learned correlations between sensory stimuli; (4) makes use of learned correlations between motor actions and sensory stimuli; (5) from these sources predicts the expected state of the world; and (6) compares the predicted sensory signals with the actual sensory signals (Merfeld, 2001, p. 190).

Merfeld's model is similar to Gray's, except that in the event of a mismatch between the expected and actual sensory signals, the mismatch is used as an error signal to guide the estimated state back toward the actual state. (See section 1.2 of the Appendix to part C of chapter 2 for further details.)
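A toy rendering of this predict-and-compare cycle may make it more vivid. The Python sketch below follows the six steps Merfeld lists, under the simplifying assumption that all of the learned correlations collapse into a linear world model and a single gain constant; these assumptions are mine, made only for illustration.

    def forward_model_step(estimated_state, motor_command, actual_sensory,
                           learned_gain=0.5):
        # Step (1): actual_sensory is the incoming sensory information.
        # Steps (2) and (4): predict the next state from the motor command,
        # using a learned motor-to-sensory correlation (here, simple addition).
        predicted_state = estimated_state + motor_command
        # Step (5): predict the sensory signal expected in that state.
        # (Step (3), correlations among sensory stimuli, is omitted here.)
        predicted_sensory = predicted_state
        # Step (6): compare predicted with actual sensory signals; in
        # Merfeld's variant, the mismatch is an error signal that pulls
        # the estimated state back towards the actual state.
        error = actual_sensory - predicted_sensory
        return predicted_state + learned_gain * error

    # The estimate tracks the world even when a motor command misfires:
    state = 0.0
    for command, sensed in [(1.0, 1.0), (1.0, 1.5), (0.0, 1.5)]:
        state = forward_model_step(state, command, sensed)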

Recently, Barbara Webb ("Neural mechanisms for prediction: do insects have forward models?", Trends in Neurosciences 27:278-282, 2004) has reviewed proposals that invertebrates such as insects make use of "forward models", as vertebrates do:

The essential idea [of forward models] is that an important function implemented by nervous systems is prediction of the sensory consequences of action... [M]any of the purposes forward models are thought to serve have analogues in insect behaviour; and the concept is closely connected to those of 'efference copy' and 'corollary discharge' (2004, p. 278).

Webb discusses a proposal that insects may make use of some sort of "look-up table" in which motor commands are paired up with their predicted sensory consequences. The table would not need to be complete in order to be adequate for the insect's purposes. The contents of this table (the predicted sensory consequences of actions) would be acquired by learning on the insect's part.
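The look-up table idea is simple enough to sketch directly. In the fragment below (again an illustration under invented names, not a claim about any real insect's neural code), the table pairs motor commands with predicted sensory consequences, is filled in by learning, and tolerates gaps:

    predicted_consequences = {}   # motor command -> predicted sensory outcome

    def learn(command, observed_outcome, rate=0.3):
        # Nudge the stored prediction towards what was actually observed.
        old = predicted_consequences.get(command, observed_outcome)
        predicted_consequences[command] = old + rate * (observed_outcome - old)

    def predict(command):
        # Return the predicted consequence, or None if this entry is still
        # unlearned - the table need not be complete to be useful.
        return predicted_consequences.get(command)

    learn("turn_left", -1.0)      # turning left shifted the pattern leftwards
    learn("turn_left", -0.8)
    predict("turn_left")          # an estimate near -0.9
    predict("thrust_up")          # None: no prediction acquired yet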

Actions and their consequences: how associations can represent goals and pathways on the motor map

The internal representation of the fly's motor commands has to be coupled with the ability to form and remember associations between different possible actions (yaw torque movements) and their consequences. Heisenberg explains why these associations matter:

A representation of a motor program in the brain makes little sense if it does not contain, in some sense, the possible consequences of this motor program (personal email, 15 October 2003).

On the hypothesis which I am defending here, the internal motor map (used in sw-mode and possibly in yaw torque learning) directly associates different yaw torque values with heat and comfort. The fly's goal (escape from the heat) could be represented on this map as a stored motor memory of the motor pattern (flying clockwise) which allows the fly to stay out of the heat, and the pathway as a stored motor memory (based on the fly's previous exploratory behaviour) of the movement (flying into the left domain) which allows the fly to get out of the heat.

The internal sensorimotor map, which the fly uses in fs-mode and sw-mode, indirectly associates different yaw torque values with good and bad consequences. For instance, different yaw torque values may be associated with the upright T-pattern on the rotating arena (the conditioned stimulus) and the inverted T-pattern, which are associated with heat (the unconditioned stimulus) and comfort respectively. On this map, the fly's goal could be encoded as a stored memory of a sensory stimulus (e.g. the inverted T) that the fly associates with the absence of heat, while the pathway would be the stored memory of a sequence of sensory stimuli which allows the animal to steer itself towards its goal.

The underlying assumption here, that the fly can form associations between things as diverse as motor patterns, shapes and heat, is supported by the proposal of Heisenberg, Wolf and Brembs (2001, p. 6) that Drosophila possesses a multimodal memory, in which "colors and patterns are stored and combined with the good and bad of temperature values, noxious odors, or exafferent motion".

Correlation mechanism

The animal must also possess a correlation mechanism, allowing it to find a temporal coincidence between its motor behaviour and the attainment of its goal (avoiding the heat). Once it finds a temporal correlation between its behaviour and its proximity to the goal, "the respective motor program is used to modify the sensory input in the direction toward the goal" (Wolf and Heisenberg, 1991, p. 699; Brembs, 1996, p. 3). In the case of the fly undergoing flight simulator training, finding a correlation would allow it to change its field of vision, so that it can keep the inverted T in its line of sight.
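One way to picture such a correlation mechanism is as a sliding-window comparison between each motor program's recent activity and the animal's recent proximity to its goal. The Python sketch below is, once more, purely illustrative: the window, threshold and data are invented, and the library routine statistics.correlation (Python 3.10 or later) merely stands in for whatever coincidence detector the fly actually employs.

    from statistics import correlation   # requires Python 3.10 or later

    def find_correlated_program(recent_activity, goal_proximity, threshold=0.7):
        # Return the motor program whose recent activity best tracks
        # progress towards the goal, if any exceeds the threshold.
        best, best_r = None, threshold
        for program, activity in recent_activity.items():
            r = correlation(activity, goal_proximity)
            if r > best_r:
                best, best_r = program, r
        return best

    recent_activity = {
        "yaw torque": [0.1, 0.4, 0.8, 0.9, 1.0],    # covaries with the goal
        "lift/thrust": [0.5, 0.2, 0.6, 0.1, 0.4],   # no consistent relation
    }
    goal_proximity = [0.0, 0.3, 0.7, 0.9, 1.0]      # 1.0 = heat fully avoided
    find_correlated_program(recent_activity, goal_proximity)   # "yaw torque"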

Self-correction

One of the conditions that we identified for self-correcting behaviour in part B was that the animal had to be able to rectify motor patterns which deviate outside the desired range. Legrand (2001) has proposed that the "efference copy" of an animal's motor program not only gives it a sense of trying to do something, but also indicates the necessity of a correction.

However, as Beisecker (1999) points out, self-correction involves modifying one's beliefs as well as one's actions, so that one can avoid making the same mistake in future. This means that animals with a capacity for self-correction have to be capable of updating their internal representations. One way the animal could do this is to continually update its multimodal associative memory as new information comes to light and as circumstances change. For example, in the fly's case, it needs to update its memory if the inverted T design on its background arena comes to be associated with heat rather than the absence of it.
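The kind of updating I have in mind can be sketched with a simple delta rule, in which each stored stimulus carries a running estimate of how strongly it predicts heat. I offer this only as an illustration of what belief revision might look like, not as a hypothesis about the fly's actual learning rule.

    memory = {"inverted T": -1.0, "upright T": 1.0}   # -1 = safe, +1 = hot

    def update(stimulus, heat_observed, rate=0.4):
        # Move the stored association towards the outcome just observed.
        target = 1.0 if heat_observed else -1.0
        memory[stimulus] += rate * (target - memory[stimulus])

    # The contingency reverses: the inverted T now comes with heat.
    for _ in range(5):
        update("inverted T", heat_observed=True)

    # memory["inverted T"] has swung from -1.0 to roughly +0.84: the fly's
    # "belief" about the inverted T is corrected, not merely its behaviour.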

How do flies follow their minimal maps?

I wish to make it clear that I do not regard the fly's map as something separate from the fly, which it can consult: rather, it is instantiated within the fly's body. To picture the fly consciously consulting its internal map when it adjusts its angle of flight at the torque meter would be grossly anthropomorphic.

Nor is the map merely some internal program which tells it how to navigate. There would be no room for agency or control in such a picture.

Rather, the map consists of a set of associations between motor patterns, sensory inputs and consequences (heat or no heat) which are formed in the fly's brain. The fly uses these associations to steer its way out of trouble. Although we can speak of the fly as updating its internal map, we should think of the fly as observing its environment, rather than the map itself.

Although we can say that a fly controls its movements by following its internal map, this should not be taken to mean that map-following is a lower-level act. It simply means that the fly uses a map when exercising control.

Why an agent-centred intentional stance is required to explain operant conditioning

We can now explain why an agent-centred mentalistic account of operant conditioning is to be preferred to a goal-centred intentional stance. A goal-centred stance has two components: an animal's goal and the information it has which helps it attain its goal. The animal's goal-seeking behaviour is triggered by the information it receives from its environment.

By contrast, our account of operant conditioning includes not only information (about the animal's present state and end state) and a goal (or end state), but also an internal representation of the means or pathway by which the animal can steer itself from where it is now towards its goal - that is, the sequence of movements and/or sensory stimuli that guides it to its goal. In operant conditioning, I hypothesise that the animal uses its memory of this "pathway" to continually fine-tune its motor patterns and correct any "overshooting".

"But why should a fine-tuned movement be called an action, and not a reaction?" - The reason is that fine-tuned movement is self-generated: it originates from within the animal's nervous system, instead of being triggered from without. The fly's efference copy enables it to monitor its own bodily movements whereby the animal's nervous system sends out impulses to a bodily organ (Legrand, 2001), and it receives sensory feedback (via the visual display and the heat beam) when it varies its bodily movements. The animal also takes the initiative when it subsequently compares the fine motor output from the nervous system with its sensory input, until it finds a positive correlation (Wolf and Heisenberg, 1991, p. 699; Brembs, 1996, p. 3). Talk of action is appropriate here because of the two-way interplay between the agent adjusting its motor output and the new sensory information it receives from its environment. Wolf and Heisenberg (1991, quoted in Brembs, 1996, p. 3, italics mine) define operant behaviour as "the active choice of one out of several output channels in order to minimize the deviations of the current situation from a desired situation", and operant conditioning as a more permanent behavioural change arising from "consistent control of a sensory stimulus."

These points should go some of the way towards answering the objections of Varner (1998) and Carruthers (2004), who regard association as too mechanical a process to indicate the presence of mental states.

"But why should we call the fly's internal representation a belief?" - There are three powerful reasons for doing so. First, the fly's internal representation tracks the truth in a robust sense: it not only mirrors a state of affairs in the real world, but changes whenever the situation it represents varies. The fly's internal representation changes if it has to suddenly learn a new pathway to attain its goal. Indeed, the fly's self-correction can be regarded as a kind of rule-following activity. Heisenberg, Wolf and Brembs (2001, p. 3) contend that operant behaviour can be explained by the following rule: "Continue a behaviour that reduces, and abandon a behaviour that increases, the deviation from the desired state."

Second, the way the internal representation functions in explaining the fly's behaviour is similar in important respects to the behavioural role played by human beliefs. If Ramsey's account of belief is correct, then the primary function of our beliefs is to serve as maps whereby we steer ourselves. The fly's internal representations fulfil this primary function admirably, to the fly's benefit. We have two options: either ascribe beliefs to flies, or ditch the Ramseyan account of belief in favour of a new one, and explain why it is superior to Ramsey's model.

Third, I contend that any means-end representation by an animal which is formed by a process under the control of the animal deserves to be called a belief. The process whereby an animal controls its behaviour was discussed above.
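To see that the rule quoted from Heisenberg, Wolf and Brembs really does specify a determinate procedure, here it is rendered as a toy control loop in Python. The behaviours and their effects on the state are invented for illustration.

    import random

    def follow_rule(desired, state, steps=50):
        # Continue a behaviour that reduces, and abandon a behaviour that
        # increases, the deviation from the desired state.
        behaviour = random.choice([-0.2, 0.2])   # an arbitrary first try
        deviation = abs(desired - state)
        for _ in range(steps):
            state += behaviour
            new_deviation = abs(desired - state)
            if new_deviation > deviation:        # deviation grew:
                behaviour = -behaviour           # abandon it, try the opposite
            # otherwise the deviation shrank: continue the behaviour
            deviation = new_deviation
        return state

    follow_rule(desired=0.0, state=3.0)   # ends hovering near the desired state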

Perhaps the main reason for the residual reluctance by some philosophers to ascribe beliefs to insects is the commonly held notion that beliefs have to be conscious. I shall defer my discussion of consciousness until chapter four. For the time being, I shall confine myself to two comments: (i) to stipulate that beliefs have to be conscious obscures the concept of "belief", as the notion of "consciousness" is far less tractable than that of "belief"; (ii) even granting that human beliefs are typically conscious, to at least some extent, it does not follow that beliefs in all other species of animals have to be conscious too.

A model of operant agency

We can now formulate a set of sufficient conditions for operant conditioning and what I call operant agency:

Definition - "operant conditioning"
An animal can be described as undergoing operant conditioning if the following features can be identified:

(i) innate preferences or drives;

(ii) innate motor programs, which are stored in the brain, and generate the suite of the animal's motor output;

(iii) a tendency on the animal's part to engage in exploratory behaviour;

(iv) an action selection mechanism, which allows the animal to make a selection from its suite of possible motor response patterns and pick the one that is the most appropriate to its current circumstances;

(v) fine-tuning behaviour: efferent motor commands which are capable of stabilising a motor pattern at a particular value or within a narrow range of values, in order to achieve a goal;

(vi) a current goal: the attainment of a "reward" or the avoidance of a "punishment";

(vii) sensory inputs that inform the animal whether it has attained its goal, and if not, whether it is getting closer to achieving it;

(viii) direct or indirect associations between (a) different motor commands; (b) sensory inputs (if applicable); and (c) consequences of motor commands, which are stored in the animal's memory and updated when circumstances change;

(ix) an internal representation (minimal map) which includes the following features:

(a) the animal's current motor output (represented as its efference copy);

(b) the animal's current goal (represented as a stored memory of the motor pattern or sensory stimulus that the animal associates with the attainment of the goal); and

(c) the animal's pathway to its current goal (represented as a stored memory of the sequence of motor movements or sensory stimuli which enable the animal to steer itself towards its goal);

(x) the ability to store and compare internal representations of its current motor output (i.e. its efference copy, which represents its current "position" on its internal map) and its afferent sensory inputs;

(xi) a correlation mechanism, allowing it to find a temporal coincidence between its motor behaviour and the attainment of its goal;

(xii) self-correction, that is:

(a) an ability to rectify any deviations in motor output from the range which is appropriate for attaining the goal;

(b) abandonment of behaviour that increases, and continuation of behaviour that reduces, the animal's "distance" (or deviation) from its current goal; and

(c) an ability to form new associations and alter its internal representations (i.e. update its minimal map) in line with variations in surrounding circumstances that are relevant to the animal's attainment of its goal.

If the above conditions are all met, then we can legitimately speak of the animal as an intentional agent which believes that it will get what it wants, by doing what its internal map tells it to do.
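For readers who prefer to survey the definition as a whole, the skeleton below gathers conditions (i)-(xii) into a single schematic Python class. It is an organisational device only: every name, number and update rule is my own assumption, and nothing here is meant to model a real nervous system.

    import random

    class OperantAgent:
        """One attribute or method per condition in the definition above."""

        def __init__(self):
            self.preferences = {"heat": -1.0}                  # (i) innate preferences
            self.motor_programs = ["yaw", "thrust", "abdomen", "legs"]   # (ii)
            self.efference_copy = 0.0                          # (ix)(a), (x)
            self.goal_range = (-0.5, 0.5)                      # (vi), (ix)(b)
            self.associations = {}                             # (viii), (ix)(c)

        def explore(self):                                     # (iii) exploration
            return random.choice(self.motor_programs)

        def select_action(self):                               # (iv) action selection
            if not self.associations:
                return self.explore()
            return max(self.associations, key=self.associations.get)

        def fine_tune(self):                                   # (v), (xii)(a)-(b)
            lo, hi = self.goal_range
            if self.efference_copy < lo:
                self.efference_copy += 0.1
            elif self.efference_copy > hi:
                self.efference_copy -= 0.1

        def learn(self, program, progress):                    # (vii), (xi), (xii)(c)
            # 'progress' stands in for sensory input reporting proximity to
            # the goal; the update keeps the minimal map current.
            old = self.associations.get(program, 0.0)
            self.associations[program] = old + 0.3 * (progress - old)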

Definition - "operant agency"
Operant conditioning is a form of learning which presupposes an agent-centred intentional stance.

Animals that are capable of undergoing operant conditioning can thus be said to exhibit a form of agency called operant agency.

(Note: this definition can also be found in section 1.3 of the Appendix to part C of chapter 2.)

Carruthers' cognitive architecture for beliefs

Carruthers (2004) argues that the presence of a mind is determined by the animal's cognitive architecture - in particular, whether it has beliefs and desires. Carruthers is prepared to regard any insect that can find its way about on a mental map as acting on a belief (2004, see quote in part 2.C.2 below). If the account of operant agency I defend here is correct, then this is precisely what happens in operant conditioning. Below, I discuss Carruthers' proposed architecture and caution against his assumption that there is a single core cognitive architecture underlying all kinds of minds.

Varner's arguments against inferring mental states from conditioning

Varner (1998) maintains that animals that are genuinely learning should be able to form primitive hypotheses about changes in their environment.

I would argue that Varner has set the bar too high here. Forming a hypothesis is a more sophisticated cognitive task than forming a belief, as: (i) it demands a certain degree of creativity, insofar as it seeks to explain the facts; (ii) for any hypothesis, there are alternatives that are also consistent with the facts.

I would also like to point out that some animals that are capable of operant agency engage in a very sophisticated form of trial-and-error learning which is strongly reminiscent of hypothesis formation. In section A.5 of the Appendix to part C of chapter 2, I describe a particularly impressive case: the behaviour of the jumping spider Portia (Wilcox, 2002), whose flexible trial-and-error learning, apparent planning ahead, and persistent maintenance of its own cognitive map place it among the most cognitively sophisticated of invertebrates.

Varner (1998) proposes (following Bitterman, 1965) that tests of reversal learning offer a good way to probe animals' abilities to generate hypotheses. I discuss this idea in section 2 of the Appendix to part D. I will simply mention here that at least one species of insect - the honeybee - passes Varner's test with flying colors.

The role of belief and desire in operant conditioning

I have argued above that the existence of map-like representations, which underlie the two-way interaction between the animal's self-generated motor output and its sensory inputs during operant conditioning, requires us to adopt an agent-centred intentional stance. By definition, this presupposes the occurrence of beliefs and desires. In the case of operant conditioning, the content of the agent's beliefs is that by following the pathway, it will attain its goal. The goal is the object of its desire.

In the operant conditioning experiments performed on Drosophila, we can say that the fly desires to attain its goal. The content of the fly's belief is that it will attain its goal by adjusting its motor output and/or steering towards the sensory stimulus it associates with the goal. For instance, the fly desires to avoid the heat, and believes that by staying in a certain zone, it can do so.

The scientific advantage of a mentalistic account of operant conditioning

If we think of the animal as an intentional agent that continually probes its environment, modifies its beliefs and fine-tunes its movements in order to attain what it wants, we can formulate and answer questions that would not even arise under a mind-neutral, goal-centred account.

Criticising the belief-desire account of agency

Bittner (2001) has recently argued that neither belief nor desire can explain why we act. A belief may convince me that something is true, but then how can it also steer me into action? A desire can set a goal for me, but this by itself cannot move me to take action. (And if it did, surely it would also steer me.) Even the combination of belief and desire does not constitute a reason for action. Bittner does not deny that we act for reasons, which he envisages as historical explanations, but he denies that internal states can serve as reasons for action.

If my account is correct, the notion of an internal map answers Bittner's argument that belief cannot convince me and steer me at the same time. As I fine-tune my bodily movements in pursuit of my object, the sensory feedback I receive from my probing actions shapes my beliefs (strengthening my conviction that I am on the right track) and at the same time steers me towards my object.

A striking feature of my account is that it makes control prior to the acquisition of belief: the agent manages to control its own body movements, and in so doing, acquires the belief that moving in a particular way will get it what it wants.

Nevertheless, Bittner does have a valid point: the impulse to act cannot come from belief. In the account of agency proposed above, the existence of innate goals, basic motor patterns, exploratory behaviour and an action selection mechanism - all of which can be explained in terms of a goal-centred intentional stance - was simply assumed. This suggests that operant agency is built upon a scaffolding of innate preferences, behaviours and motor patterns. These are what initially move us towards our object.

Bittner's argument against the efficacy of desire fails to distinguish between desire for the end (which is typically an innate drive, and may be automatically triggered whenever the end is sensed) and desire for the means to it (which presupposes the existence of certain beliefs about how to achieve the end). The former not only includes the goal (or end), but also moves the animal, through innate drives. In a similar vein, Aristotle characterised locomotion as "movement started by the object of desire" (De Anima 3.10, 433a16). However, desire of the latter kind presupposes the occurrence of certain beliefs in the animal. An object X, when sensed, may give rise to X-seeking behaviour in an organism with a drive to pursue X. This account does not exclude desire, for there is no reason why an innate preference for X cannot also be a desire for X, if it is accompanied by an internal map. Desire, then, may move an animal. However, the existence of an internal map can only be recognised when an animal has to fine-tune its motor patterns to attain its goal (X) - in other words, when the attainment of the goal is not straightforward.

Is my account falsifiable?

The proposal that fruit flies are capable of undergoing operant conditioning would be refuted if a simpler mechanism (e.g. instrumental conditioning) were shown to be able to account for their observed behaviour in flight simulator experiments.

Likewise, the theoretical basis of my account of operant conditioning would be severely weakened by the discovery that fruit flies rely on a single-stage process (not a two-stage process, as I suggested above) to figure out how to avoid the heat beam in "fs-mode", or by evidence that there is no hard-and-fast distinction, at the neurological level, between instrumental and operant conditioning in animals undergoing conditioning.


2.C.2 Spatial learning, agency and belief in insects


The honeybee is capable of some impressive feats of spatial navigation.
Picture courtesy of Professor Angela Perez Mejia, of Brandeis University, Massachusetts.

The current state of research into spatial learning in insects remains fluid. What is not disputed is that some insects (especially social insects, such as ants, bees and wasps) employ a highly sophisticated system of navigation, and that they use at least two mechanisms to find their way back to their nests: path integration (also known as dead reckoning) and memories of visual landmarks. A third mechanism - global (or allocentric) maps - has been proposed for honeybees, but its status remains controversial.

In section 2.2 of the Appendix to Part C of chapter 2, I discuss the latest findings regarding insect navigation, and argue that an insect's use of visual landmarks to steer by does indeed require minimal maps, whereas path integration does not. I argue that the continual self-monitoring behaviour of navigating insects suggests that they are indeed in control of their bodily movements, and that insects construct their own visual maps using a flexible learning process.

A model of intentional agency in navigating insects

I contend that insects that navigate using a minimal map are exhibiting a second form of intentional agency, which I call navigational agency. My reason for regarding this kind of agency as intentional is that we can stipulate conditions for navigational agency which closely parallel those for operant agency described above. Since it would be tedious to repeat them here, I shall briefly highlight the differences. (The complete list of conditions for navigational agency can be found in section 2.1 of the Appendix to Part C of chapter 2.)

Briefly: I have added an extra condition relating to sub-goals or short-term goals, such as landmarks, which the animal uses to steer itself towards its current long-term goal (e.g. a food source). The associations formed by a navigating agent are (a) between short-term goals (visual landmarks) and local vectors; and (b) between the animal's short-term goals and long-term goals (food sites or the nest). The animal's minimal map will include not only its current goal (represented as a stored memory of a visual stimulus that the animal associates with the attainment of the goal) but also its sub-goals (represented as stored memories of visual landmarks), while the animal's pathway is represented as a stored memory of the sequence of visual landmarks which enables the animal to steer itself towards its goal, as well as a sequence of vectors that help the animal to steer itself from one landmark to the next. Finally, navigation does not require a temporal correlation mechanism for finding a coincidence between the animal's motor behaviour and the attainment of its goal.
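The structure just described - sub-goals strung along a pathway, each paired with a local vector - can be sketched as follows in Python. The landmarks and vectors are invented for illustration.

    # The pathway is a stored sequence of visual landmarks (sub-goals),
    # each paired with the local vector that leads on to the next one; the
    # final entry is the long-term goal (e.g. a food source).
    pathway = [
        ("pine tree", (10, 0)),
        ("rock", (0, 5)),
        ("flower patch", (0, 0)),   # the long-term goal itself
    ]

    def next_heading(current_landmark):
        # Return the stored local vector that steers the animal from the
        # landmark it currently recognises towards its next sub-goal.
        for landmark, vector in pathway:
            if landmark == current_landmark:
                return vector
        return None   # unrecognised landmark: the map offers no guidance

    next_heading("pine tree")   # (10, 0): head for the rock
    next_heading("rock")        # (0, 5): head for the flower patch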

What do navigational agents believe and desire?

I argued in section 1 that any insect which uses a minimal map to steer itself towards its goal can be said to have beliefs. An insect that uses a minimal map to navigate can thus be said to believe that it will attain its (short or long term) goal by adjusting its motor output and/or steering towards the sensory stimulus it associates with the goal.

Carruthers (2004) offers an additional pragmatic argument in favour of ascribing beliefs and desires to animals with mental maps:

If the animal can put together a variety of goals with the representations on a mental map, say, and act accordingly, then why shouldn't we say that the animal believes as it does because it wants something and believes that the desired thing can be found at a certain represented location on the map?

While I would endorse much of what Carruthers has to say, I should point out that his belief-desire architecture is somewhat different from mine. He proposes that perceptual states inform belief states, which interact with desire states, to select from an array of action schemata, which determine the form of the motor behaviour. I propose that the initial selection from the array of action schemata is not mediated by beliefs, which only emerge when the animal fine-tunes its selected action schema in an effort to obtain what it wants. Moreover, I propose a two-way interaction between motor output and sensory input. Finally, the mental maps discussed by Carruthers are spatial ones, whereas I also allow for motor or sensorimotor maps.


2.C.3 A model of tool agency in cephalopods


Octopus arms are highly manoeuvrable. Courtesy BBC.

My reason for regarding what I call "tool agency" as a form of intentional agency is that the conditions for tool agency (see section 3.1 of the Appendix to Part C of chapter 2) are almost identical to those for operant agency, except that there is no need for a temporal correlation mechanism, and the activity required to attain one's goal is performed with an external object (a tool), and not just one's body. (In the Appendix, I also comment on Beck's (1980) commonly cited definition of a tool.)

Fine-tuning in octopuses

Most cephalopods (the class of molluscs to which octopuses belong) have very flexible limbs, with unlimited degrees of freedom. Scientists have recently discovered that octopuses control the movement of their limbs by using a decentralised system, where most of the fine-tuning occurs in the limb itself (Noble, 2001). I assess the evidence for tool agency in octopuses in section 3.2 of the Appendix to Part C of chapter 2.


2.C.4 A model of social agency in fish

Sufficient conditions for agency in a social context

Most of the sufficient conditions for operant agency and navigational agency can also be found in a social context. The big difference is that in a social context, an animal fine-tunes its movements not simply by comparing its motor output with its sensory inputs, but by following the example of another knowledgeable individual (its role model). Hence social agency requires several new conditions to be satisfied.

The relevant internal representation (minimal map) in social learning is one where copying the activities of a role model leads to the attainment of one's own goal. The complete list of conditions for social agency can be found in section 4.1 of the Appendix to Part C of chapter 2.

It should be stressed that social agency, as defined in my model, does not imply that the animals have a "theory of mind". The animals only need to be smart enough to follow the example of other individuals, as a means to the attainment of their own ends, such as food. In order to do this, they need to be able to recognise the individuals they select as "role models" and remember how reliable these role models have been in the past. However, there is no need for them to attribute beliefs and desires to the individuals they are modelling their behaviour on.

Case study: social learning in fishes

At least some species of fish appear to satisfy each of the features of my model of social agency, although it is still an open question whether any one species satisfies all of them (Bshary, Wickler and Fricke, 2002). These features are described in section 4.2 of the Appendix to part C of chapter 2, along with evidence for navigational agency and tool agency in fish. (I discuss the case of octopuses in section 4.3 of the Appendix to part C of chapter 2.)


Which animals have minds?

The upshot of this investigation is that simple minds, with beliefs and desires, and a limited capacity for intentional agency, are quite widespread in the animal kingdom. At least some insects and cephalopods instantiate at least one form of intentional agency; and some fish may exhibit all four forms.