A non-technical summary of my e-book


This is a pretty broad-ranging e-book, which attempts to answer a lot of big questions, in particular:

(1) What does it mean for something to be alive?

Let's get one thing straight immediately. This is not a question about the "meaning of life", so the answer isn't 42. The question I'm asking is: what's the difference between the things which we consider to be alive (like bacteria, plants and animals) and everything else?

The crisis relating to current definitions of "life"

Defining what it means to be alive isn't as easy as you might think. There are awkward exceptions to any simple definitions you might care to make. You might define living things as things that grow, but then so do crystals. Or you might say that living things reproduce, but so do flames and computer viruses.

Until recently, philosophers weren't too worried about these exceptions, and most of them still aren't. After all, most everyday definitions are a bit blurry around the edges. That's normal. Then, in 2002, something happened. A man named Stephen Wolfram published a book (A New Kind of Science) and opened a can of worms. When I read his book, I realised I was sitting on a bombshell.

Wolfram is a kind of polymath genius (he got his Ph.D. in theoretical physics when he was only 20), and in his book he said he thought the whole distinction between living and non-living was meaningless mumbo-jumbo. Practically everything, he said, is in some way alive, which is why he calls himself a neo-animist. What made Wolfram a neo-animist was his discovery that he could build things called abstract computational systems, which mimicked all the behaviour we normally associate with living things - growth, self-organisation, reproduction, and so on. The interesting thing about these systems is that they are governed by a set of very simple rules.
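To give a flavour of what such a system looks like, here is a minimal sketch of an elementary cellular automaton - one of the simple-rule systems Wolfram studies. (This particular example, Rule 110, is my own illustration rather than something lifted from his book: each cell is updated from nothing more than its own state and that of its two neighbours, yet the pattern that unfolds is surprisingly intricate.)

```python
# Elementary cellular automaton, Rule 110 (illustrative sketch only).
# Each cell's next state depends solely on itself and its two neighbours,
# yet the resulting pattern is far from simple.

RULE = 110  # the eight possible neighbourhoods map to the bits of this number

def step(cells):
    """Compute the next generation from the current one (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and print 20 generations.
row = [0] * 40
row[20] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```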

It gets worse. Wolfram made an impressive mathematical case in his book, to the effect that practically everything in nature is just as complex as any living thing - including us. The gist of his argument was that you could take almost any object (be it a thermostat, a container of gases, a glass of water or even an organic molecule), and rig it up somehow to perform a mathematical calculation of some sort. What's more, you could get these inanimate objects to perform exactly the same kinds of calculations as things like Pentium-4 computers, which we would commonly regard as more "complex" (and hence better at computing) - the only difference being that you would have to wait a lot longer to get the answer. In other words, most simple systems can compute answers to the same range of problems as complex systems, if you give them enough time. But in that case, you might as well say that every system is equally complex.

Now for the coup de grace. Wolfram argues that since complexity is commonly taken by scientists to be one of the distinguishing hallmarks of living things (if not THE distinguishing hallmark), then the traditional distinction between "living" and "non-living" is no longer tenable. Practically everything is alive in some way.

Why the definition of "life" matters

At this point, some of you may be asking, "Does it really matter what we call 'alive'? Isn't it just a matter of how you want to define the term?"

Actually, the definition of "life" is a very practical issue. Let me illustrate. If you're like most people, you would happily buy a plastic tree from a store for your Christmas decorations, but would blanch at the thought of chopping down a real tree (especially a nice big one) and using it to decorate your home at Christmas time. You probably consider chopping down a tree purely for decorative purposes to be a wasteful practice - a senseless destruction of something living and beautiful. "Reverence for life" is a very influential ideal in many cultures. While most people believe that a good reason is required to justify the destruction of a living thing, inanimate objects get no such respect.

But if Wolfram is right, then the ethical importance we attach to a special class of things which we consider to be "alive" (like trees, for instance) is utterly misplaced. Either everything matters ethically (because everything is in some way alive), or nothing does. Both conclusions seem ethically unpalatable.

Do only sentient beings have interests?

Not everyone believes, however, that living things are important per se. Some people define the scope of their ethical concerns in a completely different way. There are some philosophers, such as Peter Singer, who say we should only care about creatures that have feelings and can take an interest in things - that is, sentient animals like cats, dogs, elephants and humans. According to these philosophers, behaving ethically basically boils down to respecting the interests of other sentient beings - i.e. letting them do whatever they want to do.

The problem I have with this view is that it doesn't fit our moral intuitions very well. Here's a case you might like to think about (I got it from a philosopher named Gary Varner, whose book, "In Nature's Interests?" had a powerful influence on my thinking). Suppose you have a pet cat named Nanci, who likes to go outside and frolic every day. Someone tells you that there's an epidemic of feline AIDS in your area. If the "sentientist" school of thought is right, we should respect Nanci's wishes to go outside if she chooses to, but every pet owner I know would keep Nanci indoors until the feline AIDS epidemic has subsided. Varner argues that we can only justify this widely shared moral intuition about keeping Nanci indoors by broadening our definition of her interests, to include not only the set of things she happens to want, but also the things that are beneficial to her biological well-being.

"That makes sense", I hear you say, "but what's that got to do with the definition of life?" Plenty, according to Varner. For if Nanci's interests include things that promote her biological well-being, then why shouldn't we say the same about plants, which also have a biological well-being of their own? Where it gets tricky is: how do we draw the line between things that have a biological well-being and things that don't? What does it mean to be "alive"? And why aren't crystals, chairs and computers alive?

Varner has his own interesting answer to that question: anything that evolves by natural selection is alive. Trouble is, we now know that abstract computational systems can evolve too - and they're nothing like living things. So we're back at square one.

Back to Aristotle

I decided to review the definitions of "life" that had been proposed in the literature, to see if I could find one that made sense - both scientifically and ethically. After a LOT of digging, I concluded that Wolfram was right about one thing: the enterprise of defining "life" by its behavioural or functional characteristics (like growth or reproduction) was utterly misguided. I concluded that any adequate definition of life had to include the idea of living things having a "good of their own". That's why ants, bacteria and cypress trees are alive, while crystals, chairs and computers are not.

This isn't a new idea - it goes back 2,300 years, to Aristotle. But ever since the 1600s, scientists and philosophers have generally rejected Aristotle's teleological conception of life. The reasons are complex, but basically I argue that it boils down to a philosophical prejudice on their part: a "mechanistic" world-view in which objects are envisaged as interacting like billiard balls. All of this is explained in much more detail in a recent philosophy thesis by a guy named Richard Cameron (2000), who is now an Assistant Professor of Philosophy at DePauw University. Cameron defends a neo-Aristotelian account of life against contemporary scientific and philosophical objections.

In my e-book, I argue that while Aristotle's definition makes pretty good sense, it's scientifically incomplete, as it doesn't tell scientists how to determine whether something has a good of its own. This gets back to our earlier question of how we are supposed to draw the line between things that have a biological well-being and things that don't.

A new definition?

After scouring the scientific literature, I've proposed some criteria (not my own, but I've cobbled them together, so to speak), to help scientists identify things that have a "good of their own". In a nutshell, I suggest that anything with (i) parts that are regulated by a master program, (ii) an internal organisation which is hierarchical in structure, with lots of layers (a bit like a Russian doll, with little parts inside bigger parts inside even bigger parts), and (iii) embedded functionality (where the smaller parts are completely dedicated to supporting the larger parts that they comprise), is alive.

Of course, the tricky part is: showing that anything that meets these criteria also meets Aristotle's definition, and vice versa.

Later, I discuss the vexed question of whether a computer can be alive - sorry, but you'll have to read my e-book to see what I have to say on that one! I also argue that any individual that is alive can be said to have a nature of its own, as there are certain kinds of things that can be said to be good for it (e.g. food, sunlight or sex), simply because it is what it is. These things can be said to be "in its interests" even if it is completely unconscious, like a bacterium.

Modern biologists don't like to think of organisms as having "natures" because that sounds too static, and we all know that living things evolve. However, I argue that using the term "nature" shouldn't be a problem, because species evolve very slowly, over millions of years - which means that they're static for all intents and purposes. More precisely, they're static enough for us to point to a living thing and say: this organism has the same set of built-in goals (or telos, to use an old Greek term) as some of its ancestors did. I also make the surprising claim that Darwinian evolution is compatible with the best insights of Aristotle's philosophy.

Finally, I argue that it makes more sense to define ethical behaviour in terms of promoting an organism's biological well-being (or overall health, to put it another way) than in terms of satisfying its wants (as "sentientists" like Peter Singer would claim). Wants may be good, bad or indifferent. On the other hand, it is hard to see what can be wrong with promoting an organism's health. If that isn't a good thing to do, then what is?

(2) Which creatures have minds, and what kinds of minds do they have?

My concern in chapter two is to describe the simplest, most basic kind of mind a living creature could possibly have, and then to identify which creatures qualify as having what I call a "minimal mind".

A minimal mind might not necessarily be a conscious one with subjective feelings, like the experience we have when we see the colour red. After all, we have minds, yet many of our beliefs and desires lie below the threshold of consciousness. Some philosophers have suggested that there may be simple minds, whose beliefs and desires are entirely unconscious. Other philosophers say the very idea of a mind that is never conscious is utterly ridiculous.

Back in 1997, a philosopher named Daniel Dennett wrote a best-selling book about animal minds called "Kinds of Minds". I use some of the key ideas developed in Dennett's book. However, I also argue that many of the terms bandied around by philosophers and scientists - words like "sense", "memory", "flexible behaviour" and "learn", to name just a few - are not uniformly defined in the literature, and I decide to adopt a rigorous empirical approach. What's more, I examine the behaviour of all kinds of organisms - even viruses. Basically, the approach I take is a conservative one: don't interpret an organism's behaviour as a manifestation of underlying mental states unless doing so enables you to make better scientific predictions about its behaviour, and understand it better.

After sifting through seven major aspects of animal behaviour, I suggest that any animal that can: (i) sense objects in its environment, (ii) remember new skills that it acquires, (iii) flexibly update its own internal programs, which regulate its behaviour, (iv) learn to associate actions with their consequences, (v) control its own bodily movements by fine-tuning them, (vi) internally represent its current state, its goal and the pathway it needs to follow to get to its goal, and finally (vii) correct any mistakes it makes in its movements towards its goal, as well as any factual misrepresentations of its environment, in the light of new information, qualifies as an intentional agent with a mind of sorts - even if the animal doesn't have an "inner life" or subjective feelings, such as aches and pains, or the sensation of what it feels like to see the colour red.

Later on, in part C of chapter two, I list about a dozen detailed conditions that an animal has to meet before it can be said to possess this kind of "minimal mind", which, I argue, is the most basic kind of mind anything can have. Perhaps the most crucial condition is that the animal possess an "internal map" by which it steers itself around its environment. When I say "internal map", I mean an animal's own internal representations (created by a process under its control) of its movement towards its goals. I argue that these representations: (i) track the truth, insofar as they correct their own mistakes; (ii) possess map-like features, as typical beliefs do (which is why I call them minimal maps); and (iii) incorporate both means and ends, making intentional agency possible. I argue that it is appropriate to characterise these representations as beliefs, and the animal's goals as its objects of desire.

The idea of ascribing "beliefs" to an animal even if it is completely lacking in subjective awareness may seem rather strange to many readers. However, I argue that no other term is appropriate to describe the way in which these animals steer themselves around their environment, using their internal "maps" to guide them (more about that in the third part of chapter two).

Finally, I claim that these "minimal minds" come in no less than four different varieties. In the end, I conclude that many insects and spiders, as well as octopuses (that's the correct plural, by the way; "octopi" is wrong) and squid, and of course fish, qualify as having minimal minds.

(3) Which creatures have emotions?

In chapter three, I argue that we can attribute certain kinds of emotions to animals: namely, those that help them to survive and/or flourish. I don't say anything terribly original in this chapter: instead, I argue that a theory of animal emotions that is capable of solving all of the philosophical problems relating to animal emotions is already available. Rather embarrassingly for philosophers, the author of this theory isn't a philosopher, but a neuroscientist named Jaak Panksepp, who has done a lot of research into animal emotions. Basically, Panksepp claims that the key to understanding animals' emotions lies in their brains, which have evolved over time and acquired several distinct patterns of responding to the various kinds of challenges in their environment, to help them survive. Each distinct pattern corresponds to a different kind of emotion. I argue that the key insights of Panksepp's theory can not only tell us what emotions are, but also how to identify them in animals.

I conclude that there is overwhelming evidence for at least seven kinds of basic emotions in mammals, and that simpler animals such as fish also appear to have a few basic emotions. In fact, anything with a minimal mind probably has emotions of some sort. That means that insects may have emotions too. This doesn't mean that insects have conscious feelings or anything. On the contrary, I argue that even in human beings, emotions are sometimes unconscious, and in some animals, emotions are always unconscious. (On this point, I part company with Panksepp and side with neurologists like LeDoux.) However, I also argue that no animal can be said to have emotions unless it is capable of having beliefs - at least, simple beliefs, of the kind I describe in chapter two, where an animal uses an internal map to steer itself around its environment. (This is another point where I disagree with Panksepp, who argues that feelings arose in the ancestors of today's animals long before they had beliefs or any other cognitive states.)

(4a) Which animals are subjectively aware (conscious) of the world around them?

This (for a lot of laypeople) is the $64 million question: which animals have subjective feelings (including pleasure and pain), or an "inner life", or consciousness, or awareness, or whatever you want to call it? I attempt to give an answer in the first part of chapter four.

The word "conscious" can sometimes simply mean "awake" as opposed to asleep, but in this chapter, when I use the word "conscious", I mean not just awake, but also subjectively aware of one's experiences - like the feeling of what it is like to experience the colour red. In any case, not many people realise that being "awake" can have two meanings: for some animals, it's just a bodily state of activity (virtually all animals rest at intervals and are active at other times), but a more select group of animals (mammals and birds) has brain states that correspond to being awake and asleep. (Interestingly, it turns out that this smaller group of animals are the ones that are capable of experiencing subjective awareness. Is this a coincidence? I don't think so, but you can decide for yourself when you look at the evidence.)

Philosophers, with their knack for splitting hairs, have distinguished umpteen different varieties of consciousness. I examine the distinctions, and conclude that most of them are irrelevant to animals and/or badly defined. After looking at the scientific evidence, I conclude that animal behaviour alone cannot tell us with certainty which animals are conscious or have subjective feelings. We need to look at animals' brains to settle that question.

The neurological evidence suggests pretty strongly that mammals have feelings. The evidence for birds is not as clear, but their brains are like those of mammals in some ways, and their behaviour is just as sophisticated as that of mammals, so I'm inclined to give them the benefit of the doubt. However, the neurological case for consciousness in reptiles is pretty weak, and we can be pretty sure that fish and frogs don't have any subjective feelings at all. Octopuses and squid just might, though.

Some philosophers make a big deal of the question of which animals have subjective feelings, but I argue they're wrong, and that actually, consciousness doesn't matter very much, ethically speaking. The distinction between living and non-living things, or even between creatures with minimal minds (such as insects) and those without, is, I argue, much more ethically important than the distinction between animals possessing subjective awareness (mammals and birds) and those lacking it.

(4b) Are there any non-human animals that qualify as rational?

Believe it or not, the best evidence for rationality in animals comes not from chimpanzees but from crows. The reader may have heard of the amazing exploits of Betty the crow, who proved to be capable of fashioning a hook from a piece of metal, with her beak, in order to retrieve a piece of meat - something she'd never seen done before. In the second part of chapter four, I examine this case and conclude that Betty acted rationally. Some philosophers would disagree: they argue that because Betty lacks the capacity for language, she cannot give a reason for her actions, and therefore cannot be said to act for a reason. (It's true that there are some birds, such as Alex the parrot, that can use human language to get what they want or distinguish colours and shapes, but not even Alex is smart enough to answer a question like "Why did you do that?") However, I claim that these philosophers have set the bar too high. A rational animal doesn't have to justify its actions to others, but only to itself. But we are still left with the question: how do we recognise animals that can do this?

Briefly, I propose that any animal that behaves as if it had a model in its head of a tool it wants to create to achieve its goal (e.g. an animal that persists in trying to transform something into a suitable tool that would enable it to realise its ends) is rational. The downside is that, on the available evidence, most animals don't behave like Betty the crow. Experiments have shown that even monkeys are pretty dumb when it comes to understanding the connection between means and ends, so it looks as if hardly any animals qualify as rational. Tool manipulation is not the only kind of rationality, however. Some animals may have some kind of Machiavellian ability to socially manipulate other animals. At the present time, the jury is out on that question.

(4c) Are there any non-human animals that have a moral sense of right and wrong?

The short answer is: almost certainly not. I argue in the third part of chapter four that: (i) we cannot speak of morality in non-human animals unless parents can teach their offspring to observe moral norms (which, I go on to argue, presupposes that they are capable of attributing beliefs, including mistaken beliefs, to other individuals - a capacity that few if any non-human animals seem to possess); (ii) a moral agent must be capable of evaluating and improving her conduct over the entire course of her life (which is impossible without a detailed autobiographical memory of episodes from one's own past - something which only humans seem to possess); and (iii) being moral requires one to possess certain virtues, which in turn presupposes the ability to critically question one's own attitudes and correct those of others (impossible without a highly sophisticated kind of language, of the sort that only humans seem to possess).

(5) Which creatures do we have duties to, and what kinds of duties do we have towards them?

In chapter five, I argue against the idea that we have duties towards ecosystems as such, because ecosystems, unlike living organisms, do not possess the right kind of internal unity to qualify as having any kinds of interests (i.e. a master program, a hierarchical structure and embedded functionality, as described in chapter one). In other words, I am not a holist.

On the other hand, I argue that we have some duties towards all kinds of living organisms - even bacteria - because they all have various "goods of their own" which benefit them when realised. On this point, I side with those philosophers who call themselves biocentrists: I believe that every individual organism matters. However, I don't claim that we have the same duties to each and every kind of living thing, or that killing a bacterium is just as bad as killing a panda. Nor do I claim that the duties we have to other creatures are unconditional ones. (It would be pretty inconvenient if we had those kinds of duties, because then we couldn't kill the AIDS virus, or eat living things for food.) Rather, the duties we have are prima facie duties, which may be legitimately over-ridden under certain circumstances which I describe in chapter six, when our own interests are at stake.

Later, I argue that because animals can realise various kinds of goods that bacteria cannot (such as goods relating to practical knowledge, the companionship of other individuals, and play), they have more "dimensions of value" than bacteria. Thus killing an animal is a greater wrong than killing a microbe. I then argue that because humans are not only rational agents but also moral agents, the number of different kinds of "basic human goods" is much more extensive than the list of "basic animal goods". Hence the wrongful killing of a human being is much worse than the wrongful killing of an animal, because the human being (as a moral agent) is robbed of much more.

The reader might ask: what about animals that lack the capacity to feel? Are they worth less than animals with feelings? And what about human beings? Are babies or permanently comatose individuals less important than the rest of us, because they lack the capacity for moral agency? The short answer is: no, because it is a being's nature, rather than its capacities, that determines its moral value. (I discuss this question in more detail in appendices A and B to chapter 5.)

(6) What are human beings morally entitled to do to other creatures, while trying to advance their own interests? For instance, is it OK to eat animals for food, or perform experiments on them?

In the sixth and final chapter, I propose an ethical principle which defines what goods human beings are entitled to pursue. Specifically, I propose that human beings are morally entitled to pursue any of the "basic human goods" listed in chapter five, even at the expense of other organisms, provided they meet certain conditions in pursuing these goods. This is a restricted self-preference principle: it affirms that human beings are entitled to put their own interests first, while restricting the grounds on which people may do so to the pursuit of basic human goods.

Later, I list the conditions under which humans may pursue their own good, even at the expense of other life-forms. The gist is that these conditions are compatible with practices that have been vital to the survival and/or flourishing of humanity (e.g. hunting in subsistence communities; agriculture; and mass industrialisation), but not with practices that harm creatures but are not essential to human flourishing - for instance, suburban sprawl (which destroys bushland); city dwellers driving their cars to work every day instead of car-pooling or using public transport; the practice of hunting purely for sport; and meat-eating in an affluent society (excepting those individuals whose health would be endangered by going on a vegetarian diet). I also discuss the ethics of animal experimentation and xenotransplantation, but you'll have to read my e-book to see what I have to say about those issues!

All of this might sound as if animals have no rights against people, but I argue that they do in fact have certain moral rights. However, I contend that for the most part, these rights are not absolute.

Finally, I propose a principle which would allow humans to inflict harm on other kinds of organisms, in defence of an ecosystem. One example is culling animal populations within an ecosystem when they are growing at an unchecked rate.