Eugene McDermott Professor, Department of Brain and Cognitive Sciences, and director, Center for Brains, Minds, and Machines, MIT
Recent months have seen an increasingly public debate forming around the risks of artificial intelligence—in particular, AGI (artificial general intelligence). Some (including the physicist Stephen Hawking) have called AI the top existential risk to humankind, and recent films such as Her and Transcendence have reinforced the message. Thoughtful comments by experts in the field—Rod Brooks and Oren Etzioni among them—have done little to settle the debate.
I argue here that research on how we think and on how to make machines that think is good for society. I call for research that integrates cognitive science, neuroscience, computer science, and artificial intelligence. Understanding intelligence and replicating it in machines goes hand in hand with understanding how the brain and the mind perform intelligent computations.
Recent progress in, and convergence among, technology, mathematics, and neuroscience has created a new opportunity for synergies across fields. The dream of understanding intelligence is an old one, yet—as the debate around AI shows—now is an exciting time to pursue this vision. We’re at the beginning of an emerging field: the science and engineering of intelligence, an integrated effort that will ultimately make fundamental progress, with great value to science, technology, and society. We must push ahead with this research, not pull back.
The problem of intelligence—what it is, how the human brain generates it, and how to replicate it in machines—is one of the great problems in science and technology, together with the problem of the origin of the universe and the nature of space and time. It may be the greatest problem of all, because it’s the one with a large multiplier effect—almost any progress on making ourselves smarter or developing machines that help us think better will lead to advances in the other great problems of science and technology.
Research on intelligence will eventually revolutionize education and learning. Systems that recognize how culture influences thinking could help avoid social conflict. The work of scientists and engineers could be amplified to help solve the world’s most pressing technical problems. Mental health could be understood on a deeper level, so that we might find better ways to intervene. In summary, research on intelligence will help us understand the human mind and brain, build more-intelligent machines, and improve the mechanisms for collective decisions. These advances will be critical to the future prosperity, education, health, and security of our society. This, again, is the time to greatly expand research on intelligence, not withdraw from it.
We’re often misled by “big,” somewhat ill-defined, long-used words. Nobody so far has been able to give a precise, verifiable definition of what general intelligence or thinking is. The only definition I know that, though limited, can be practically used is Alan Turing’s. With his test, Turing provided an operational definition of a specific form of thinking—human intelligence.
Let’s then consider human intelligence as defined by the Turing Test. It’s becoming increasingly clear that there are many facets of human intelligence. Consider, for instance, a Turing Test of visual intelligence—that is, questions about an image, a scene, which may range from “What is there?” to “Who is there?” to “What is this person doing?” to “What is this girl thinking about this boy?”—and so on. We know by now, from recent advances in cognitive neuroscience, that answering these questions requires different competencies and abilities, often independent from one another, often corresponding to separate modules in the brain. The apparently similar questions of object- and face-recognition (“What is there?” versus “Who is there?”) involve rather distinct parts of the visual cortex. The word “intelligence” can be misleading in this context, like the word “life” was during the first half of the last century, when popular scientific journals routinely wrote about the problem of life as if there were a single substratum of life waiting to be discovered that would unveil the mystery.
Speaking today about “the problem of life” sounds amusing: Biology is a science dealing with many different great problems, not just one. Intelligence is one word but many problems—not one but many Nobel prizes. This is related to Marvin Minsky’s view of the problem of thinking, captured by his slogan “Society of Mind.” In the same way, a real Turing Test is a broad set of questions probing the main aspects of human thinking. For this reason, my colleagues and I are developing a framework built around an open-ended set of Turing+ questions, in order to measure scientific progress in the field. The plural “questions” emphasizes the many different intelligent abilities to be characterized and possibly replicated in a machine—basic visual recognition of objects, the identification of faces, the gauging of emotions, social intelligence, language, and much more. The “Turing+” emphasizes that a quantitative model must match human behavior and human physiology—the mind and the brain. The requirements are thus well beyond the original Turing Test; an entire scientific field is needed to make progress on understanding them and developing the related technologies of intelligence.
Should we be afraid of machines that think?
Since intelligence is a whole set of solutions to independent problems, there’s little reason to fear the sudden appearance of a superhuman machine that thinks, though it’s always better to err on the side of caution. Of course, each of the many technologies that are emerging and will emerge over time in order to solve the different problems of intelligence is likely to be powerful in itself—and therefore potentially dangerous in its use and misuse, as most technologies are.
Thus, as is the case in other parts of science, proper safety measures and ethical guidelines should be in place. There’s also probably a need for constant monitoring (perhaps by an independent multinational organization) of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only am I unafraid of machines that think, but I find their birth and evolution one of the most exciting, interesting, and positive events in the history of human thought.