on animate intelligence
and the cost of denying our animal nature (Symposium on Animals: Part I)
Note to our readers: this is the first in a series of essays on animals in the coming weeks, including one by a guest author. In the next essay, Tara considers animals and irony.
Not long ago, artificial intelligence (AI) seemed to be primarily a theoretical question for academics and researchers to study. Now, with the advent of large language models (LLMs) like those behind the interactive text service ChatGPT and the image generator Midjourney, AI is a significant social phenomenon. Like other teachers of first-year college humanities courses, I will be spending some time this summer thinking about whether the ready availability of AI should change the way I present the basics of argumentative writing.1
But this striking social phenomenon is also an intellectual one: technologists make breathless claims that large language models represent a brave new world of artificial intelligence, and a cadre of philosophers, journalists, and cultural critics stands ready to entertain or endorse those claims.2
Yet versions of these claims have been made for decades. More than forty years ago, John Searle responded with a tidy argument against ‘Strong AI’, the idea that a system that imitates language use accurately enough through symbolic manipulation should count as understanding the meaning of its outputs.
Searle’s thought experiment imagines a person, isolated in a room and ignorant of a certain language, who reliably follows a book of instructions to produce, for any given input, a convincingly appropriate output in that language. Neither the person nor the book (a program or algorithm) seems capable of understanding anything in the language in question. Yet the linguistic output might well fool us into thinking otherwise.
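The scenario can be made concrete with a toy sketch. What follows is purely illustrative (the rule table is a hypothetical stand-in for Searle’s instruction book, and real systems are incomparably more elaborate), but the principle of blind symbol-matching is the same:

```python
# A toy "Chinese Room": the book of instructions is just a table
# mapping input symbols to output symbols. Following it requires
# matching shapes, not understanding meanings.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很晴朗。",  # "Nice weather today." -> "Yes, very sunny."
}

def room(message: str) -> str:
    """Look up the reply the book prescribes; no step involves knowing Chinese."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # Convincing output, produced by symbol-matching alone.
```

Nothing in the table or the lookup involves grasping what the sentences mean; that gap between convincing output and understanding is Searle’s point.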
My point in bringing up Searle’s decades-old argument is not to dismiss the new generation of AI, but rather to note that the paradigm in AI research of simulating competence has not changed, despite the novel scale of the computational resources that LLMs employ. As a result, many debates over artificial intelligence circle back to a background assumption, namely, that thinking is itself a computational process.
Advances in empirical cognitive science, especially studies of memory, perception, attention, and decision-making, have made this computational view increasingly attractive. Moreover, thinking about the mind and the brain in this way seems to reflect a sober scientific naturalism, a departure from the metaphysically disreputable speculations of philosophers and religionists.
How, then, can we resist the claim that we encounter genuine intelligence in the programs that AI researchers are developing? And if we manage to do so now, when we use AI software on our wonted phones and laptops, then what about in a future where our lives are pervaded, physically, by robotic assistants?
The answer to this challenge, I believe, lies in the world we share with animals.
When I say that we inhabit a world shared with animals, I mean to include the animals that we are. We not only enter (and leave) the world as animals, but persist, exist, as animals in the time between. This evident fact is easily forgotten.
Part of keeping the fact of our being animals in view is resisting the urge to see ourselves solely as minds. If Aquinas was right, there are in fact beings who just are minds: the Angels. We are akin to them insofar as we are minded creatures. But our way of being minded is by being a minded animal.
For Aquinas, only the human is a minded animal. But that is because he means something very specific by the mind or the intellect, namely, a principle by which one can understand the very natures of things. Memory, imagination, even experience and prudence — Aquinas is quite willing to say non-human animals enjoy these powers.
In any case, many animals are evidently intelligent in the sense that they behave purposefully, acquire new abilities, adapt their behavior to a dynamic grasp of their environment, and so on. The mechanistic view of Descartes, who took animals to be mere automata, shows that we can get confused about the evident intelligence of animals, too.
As Alasdair MacIntyre argues in his lovely little book Dependent Rational Animals, we can keep this confusion at bay by attending to the intelligent animals that we are most like. On this list are not only other primates, with whom we share a close evolutionary history, but also dolphins and elephants, creatures who are richly social, engage in play, and experience complex emotions.
MacIntyre makes the further point that we learn a great deal about ourselves in giving our attention to the lives of these creatures. Philosophers have tended to dwell on consciousness or self-consciousness as the marks of intelligence, whatever exactly these are. When we look at other intelligent animals, those notions fade in importance by comparison to the social and communicative powers these animals display.
In this regard, MacIntyre notes, a child is very much a (human) animal, possessed of these social and communicative powers even before it has language or concepts. After all, we do not become something we are not in attaining developmental maturity. Rather, these powers achieve a fuller mode of expression when we are able to use them linguistically and conceptually. Nor do the earlier possibilities disappear. Much of what we do remains tied closely to being the animals we first were in childhood, the same animals we are now.
The figure who looms behind both Aquinas’s and MacIntyre’s picture of the human animal is, of course, Aristotle. One of the key ideas in Aristotle’s account of what we are as animals, perhaps less celebrated than his exposition of the systematic organization of animal bodies or his subtle reflections on animal cognition — and considerably harder for us to understand — is that thinking is life.
It is this idea — that thinking is life — that will equip us, with as much sober naturalism as the best research in cognitive psychology can claim, to resist the assumption driving AI research: that thinking is essentially computational.
Two words of caution are needed at this point, one philosophical and one historical.
The philosophical word of caution is this: to affirm that thinking is life is to accept a certain picture of things. What I mean by that is that one does not come to be convinced of this claim by accepting premises that are more basic than it. Rather, it is a basic claim itself. But to accept it is not to take a leap of faith either. Rather, we can come to accept this view of things by noticing how it helps to organize other parts of our thought in ways that give them clarity and a more intelligible grounding.
The historical word of caution is this: Aristotle did not exactly say thinking is life. Rather, what he said was that life is thinking or perceiving (Nicomachean Ethics IX.9). Perception picks out the mode of cognition he thinks we share with other animals. But since I accept MacIntyre’s view that we risk drawing too sharp a boundary between us and the children we were if we divide off these modes of cognition, I have used a broader notion of thinking to cover both dimensions. I also find it useful to invert the expression, since what Aristotle is saying is not that when we are living we are engaged in thinking, but rather that thinking is a mode of life or a characteristic way that life manifests itself.
Now, the category of life, in the Aristotelian sense, brings with it a host of other notions without which it cannot be sustained in our thinking. Of these, the notions of flourishing and life-activity are among the most important. When we know that something is alive, we know also that it might succeed in some way, and that its success or flourishing consists in engaging well in its life-activities. That is as true of the most primitive moss as it is of dolphins and elephants and of us, creatures who flourish in a social and intelligent way because their lives are characteristically social and intelligent.
To declare that thinking is life is to say that thinking is the sort of life-activity some of whose successful instances might help explain why a living thing is flourishing. Aristotle thought that our flourishing could involve both practical and contemplative modes of thinking. Practical thinking always strives for something outside itself, while contemplative thinking is self-contained. But, for Aristotle, both alike are part of the goal-directed structures of life, insofar as they are goals for minded animals like us.
With the more capacious notion of thinking, we can say, simply, that thinking is life for minded animals, including the perceptual cognition common to all animal life. It is an activity that characterizes and pervades animal life — even our sleep, when we are least active, is the sleep of minded animals. (Think, too, of the dog chasing after something or other in its dreams, its limbs twitching accordingly.)
If we accept that thinking is life, a way that life is manifest, questions about the relationship between computation and thinking take on a different cast altogether. Issues about semantics, intentionality, and consciousness will seem less important when assessing apparently thinking machines. Rather, what we want to know is what they are doing with what looks like a cognitive apparatus and what they are doing it for. What is the life that such (apparent) thinking characterizes?
Surely, ChatGPT and Midjourney, whatever else they are, do not have lives in the ways that minded animals do. One question, then, is whether thinking could be anything else, in addition to its being life. On the Aristotelian picture I have sketched, the computational processes that underlie the responsiveness of these engines, just like the computational processes that take place in the bodies and brains of minded animals, are the tools or instruments of life, not life itself. But, in the case of AI, the lives in question are ours, not theirs.
Of course, the issue of robotic assistants, equipped with sensory and motor apparatus, arises once more. Would such complex creations count as having a life of their own? That would depend not on their having bodies, or being capable of complex behavior, or being capable of reproduction or any number of other features traditionally attributed to life, but purely on whether their success is manifested in their life-activities. In other words, are these creatures for themselves or for us? For a creature to be for itself excludes being designed by and for another creature.3 We would have to enter much further into science fiction before any putative AI exceeds us in this way.
I have tried to show here that reflection on what we share with animals, or better, what we share as animals, can lead us past a view of thinking as something essentially computational. The appeal of this disembodied view of thinking may even depend on denying our animality.
If we are instead simply consciousness, then not only might we one day be able to upload and thereby preserve ourselves — another fantasy of the techno-utopians — but our bodies would turn out to be shells or appendages, rather than what we are. There is a great deal to say about the costs of such a view of ourselves, but one is the temptation to aspire to or hope for forms of liberation that are, in fact, self-annihilation. In my view, that is the real existential danger of AI.
1. The big question for teachers, of course, is whether to invite the use of AI in some way or instead to banish it.
2. Take, for example, only most recently, the claim by two philosophers that so-called generative agents “may be the first members of a new moral community”, despite existing as representations in text files.
3. An attentive reader might notice the importance of my saying another creature and not simply another.