Author, Machines Who Think and The Universal Machine; coauthor (with Edward A. Feigenbaum), The Fifth Generation: Artificial Intelligence & Japan’s Computer Challenge to the World
For more than fifty years I’ve watched the ebb and flow of public opinion about artificial intelligence: It’s impossible, can’t be done; it’s significant; it’s negligible; it’s a joke; it will never be strongly intelligent, only weakly so; it will destroy the human species. These extremes have lately given way to an acknowledgment that AI is an epochal scientific, technological, and social—human—event. We’ve developed a new mind to live side by side with ours. If we handle it wisely, it can bring immense benefits, from the global to the personal.
One of AI’s futures is imagined as a wise and patient Jeeves to our mentally inferior Bertie Wooster selves (“Jeeves, you’re a wonder.” “Thank you, sir. We do our best.”). This is possible, certainly desirable; we can use the help. Chess offers a model: Grandmasters Garry Kasparov and Hans Berliner have both declared publicly that chess programs find moves that humans wouldn’t and are teaching human players new tricks. Although Deep Blue beat Kasparov when he was one of the strongest world chess champions ever, he and most observers believe that even better chess is played by teams of humans and machines combined. Is this a model of our future relationship with smart machines? Or is it only temporary, while the machines push closer to a blend of our kind of smarts plus theirs? We don’t know. In speed, breadth, and depth, the newcomer is likely to exceed human intelligence. It already has, in many ways.
No novel science or technology of such magnitude arrives without disadvantages, even perils. To recognize, measure, and meet them is a task of grand proportions. That task has already been taken up formally by experts in the field—philosophers, ethicists, legal scholars, and others trained to explore values beyond simple visceral reactions—in a project called AI100, based at Stanford University. No one expects easy or final answers, so the task will be long and continuous, funded, for a century, by one of AI’s leading scientists, Eric Horvitz, who, with his wife Mary, conceived this unprecedented study.
Since we can’t seem to stop the pursuit of AI, and since our literature tells us we’ve imagined, yearned for, an extrahuman intelligence for as long as we have records, the enterprise must be impelled by the deepest, most persistent of human drives. These beg for explanation. After all, this isn’t exactly the joy of sex.
Any scientist will say it’s the search to know. “It’s foundational,” an AI researcher told me recently. “It’s us looking out at the world, and how we do it.” He’s right. But there’s more.
Some say we do it because it’s there, an Everest of the mind. Others, more mystical, say we’re propelled by teleology: We’re a mere step in the evolution of intelligence in the universe, attractive even in our imperfections but hardly the last word.
Entrepreneurs will say that this is the future of making things—the dark factory, with unflagging, unsalaried, uncomplaining robot workers—although what currency postemployed humans will use to acquire those robot products, no matter how cheap, is a puzzle to be solved.
Here’s my belief: We long to preserve ourselves as a species. For all the imaginary deities we’ve petitioned throughout history who have failed to protect us—from nature, from one another, from ourselves—we’re finally ready to call on our own enhanced, augmented minds instead. It’s a sign of social maturity that we take responsibility for ourselves. We are as gods, Stewart Brand famously said, and we may as well get good at it.
We’re trying. We could fail.