Professor of cognitive robotics, Imperial College London; author, Embodiment and the Inner Life
Just suppose we could endow a machine with human-level intelligence, that is to say, with the ability to match a typical human being in every (or almost every) sphere of intellectual endeavor, and perhaps to surpass every human being in a few. Would such a machine necessarily be conscious? This is an important question, because an affirmative answer would bring us up short. How would we treat such a thing if we built it? Would it be capable of suffering or joy? Would it deserve the same rights as a human being? Should we bring machine consciousness into the world at all?
The question of whether a human-level AI would necessarily be conscious is also a difficult one. One source of difficulty is the fact that multiple attributes are associated with consciousness in humans and other animals. All animals exhibit a sense of purpose. All (awake) animals are, to a greater or lesser extent, aware of the world they inhabit and the objects it contains. All animals, to some degree or other, manifest cognitive integration, which is to say they can bring all their mental resources—perceptions, memories, and skills—to bear on the ongoing situation in pursuit of their goals. In this respect, every animal displays a kind of unity, a kind of selfhood. Some animals, including humans, are also aware of themselves—of their bodies and the flow of their thoughts. Finally, most, if not all, animals are capable of suffering, and some are capable of empathy with the suffering of others.
In (healthy) humans, all these attributes come together as a package. But in an AI they can potentially be separated. So our question must be refined. Which, if any, of the attributes we associate with consciousness in humans is a necessary accompaniment to human-level intelligence? Well, each of the attributes listed (and the list is surely not exhaustive) deserves a lengthy treatment of its own. So let me pick just two—namely, awareness of the world and the capacity for suffering and joy. Awareness of the world, I would argue, is indeed a necessary attribute of human-level intelligence.
Surely nothing would count as having human-level intelligence unless it had language, and the chief use of human language is to talk about the world. In this sense, intelligence is bound up with what philosophers call intentionality. Moreover, language is a social phenomenon, and a primary use of language within a group of people is to talk about the things they can all perceive (such as this tool or that piece of wood), or have perceived (yesterday’s piece of wood), or might perceive (tomorrow’s piece of wood, maybe). In short, language is grounded in awareness of the world. In an embodied creature or a robot, such an awareness would be evident from its interactions with the environment (avoiding obstacles, picking things up, and so on). But we might widen the conception to include a distributed, disembodied artificial intelligence equipped with suitable sensors.
To convincingly count as a facet of consciousness, this sort of world-awareness would perhaps have to go hand-in-hand with a manifest sense of purpose and a degree of cognitive integration. So perhaps this trio of attributes will come as a package even in an AI. But let’s put that question aside for a moment and get back to the capacity for suffering and joy. Unlike world-awareness, there’s no obvious reason to suppose that human-level intelligence must have this attribute, even though it’s intimately associated with consciousness in humans. We can imagine a machine carrying out, coldly and without feeling, the full range of tasks requiring intellect in humans. Such a machine would lack the attribute of consciousness that counts most when it comes to according rights. As Jeremy Bentham noted, when considering how to treat nonhuman animals, the question is not whether they can reason or talk but whether they can suffer.
There’s no suggestion here that a “mere” machine could never be capable of suffering or joy—that there’s something special about biology in this respect. The point, rather, is that the capacity for suffering and joy can be dissociated from other psychological attributes bundled together in human consciousness. But let’s examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal’s awareness of the world, of what the world affords for good or ill (in J. J. Gibson’s terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of potential prey by moving toward it. Against the backdrop of a set of goals and needs, an animal’s behavior makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.
What of human-level artificial intelligence? Wouldn’t a human-level AI necessarily have a complex set of goals? Couldn’t its attempts to achieve its goals be frustrated, thwarted at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans know?
Here the combination of imagination and intuition runs up against its limits. I suspect we won’t find out how to answer this question until confronted with the real thing. Only when more sophisticated AI is a familiar part of our lives will our language games adjust to such alien beings. But of course by that time it may be too late to change our minds about whether they should be brought into the world. For better or worse, they’ll already be here.