THINKING DOES NOT IMPLY SUBJUGATING

STEVEN PINKER

Johnstone Family Professor, Department of Psychology, Harvard University; author, The Sense of Style: The Thinking Person’s Guide to Writing in the Twenty-First Century


Thomas Hobbes’s pithy equation of reasoning as “nothing but reckoning” is one of the great ideas in human history. The notion that rationality can be accomplished by the physical process of calculation was vindicated in the twentieth century by Alan Turing’s thesis that simple machines can implement any computable function, and by models from D. O. Hebb, Warren McCulloch, and Walter Pitts and their scientific heirs showing that networks of simplified neurons could achieve comparable feats. The cognitive feats of the brain can be explained in physical terms: To put it crudely (and critics notwithstanding), we can say that beliefs are a kind of information, thinking a kind of computation, and motivation a kind of feedback and control.
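To make the McCulloch-Pitts point concrete, here is a minimal sketch (my own illustration, not drawn from their papers): each "neuron" simply fires when its weighted inputs reach a threshold, yet a two-layer network of such units already reckons its way to a logical function like exclusive-or.

    # Illustrative sketch: McCulloch-Pitts-style threshold units.
    # Each unit fires (outputs 1) when its weighted inputs meet a threshold.

    def neuron(inputs, weights, threshold):
        """Fire iff the weighted sum of inputs reaches the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def xor(a, b):
        # Two layers: an OR unit and a NAND unit feed an AND unit, yielding XOR.
        or_unit = neuron([a, b], [1, 1], threshold=1)
        nand_unit = neuron([a, b], [-1, -1], threshold=-1)
        return neuron([or_unit, nand_unit], [1, 1], threshold=2)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))  # prints 0, 1, 1, 0

Nothing in the little network "understands" logic; the computation falls out of the physical arrangement of weights and thresholds, which is the whole point of reasoning as reckoning.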

This is a great idea for two reasons. First, it completes a naturalistic understanding of the universe, exorcising occult souls, spirits, and ghosts in the machine. Just as Darwin made it possible for a thoughtful observer of the natural world to do without creationism, Turing and others made it possible for a thoughtful observer of the cognitive world to do without spiritualism.

Second, the computational theory of reason opens the door to artificial intelligence—to machines that think. A human-made information processor could, in principle, duplicate and exceed the powers of the human mind. Not that this is likely to happen in practice, since we’ll probably never see the sustained technological and economic motivation necessary to bring it about. Just as inventing the car did not involve duplicating the horse, developing an AI system that could pay for itself won’t require duplicating a specimen of Homo sapiens. A device designed to drive a car or predict an epidemic need not be designed to attract a mate or avoid putrid carrion.

Nonetheless, recent baby steps toward more intelligent machines have led to a revival of the recurring anxiety that our knowledge will doom us. My own view is that current fears of computers running amok are a waste of emotional energy—that the scenario is closer to the Y2K bug than the Manhattan Project.

For one thing, we have a long time to plan for this. Human-level AI is still the standard fifteen to twenty-five years away, just as it always has been, and many of its recently touted advances have shallow roots. It’s true that in the past, “experts” have comically dismissed the possibility of technological advances that quickly happened. But this cuts both ways: “Experts” have also heralded (or panicked over) imminent advances that never happened, like nuclear-powered cars, underwater cities, colonies on Mars, designer babies, and warehouses of zombies kept alive to provide people with spare organs.

Also, it’s bizarre to think that roboticists will not build in safeguards against harm as they proceed. They wouldn’t need any ponderous “rules of robotics” or some newfangled moral philosophy to do this, just the same common sense that went into the design of food processors, table saws, space heaters, and automobiles. The worry that an AI system would get so clever at attaining one of its programmed goals (like commandeering energy) that it would run roughshod over the others (like human safety) assumes that AI will descend upon us faster than we can design fail-safe precautions. The reality is that progress in AI is hype-defyingly slow, and there will be plenty of time for feedback from incremental implementations, with humans wielding the screwdriver at every stage.

Would an artificially intelligent system deliberately disable these safeguards? Why would it want to? AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world. But intelligence is the ability to deploy novel means to attain a goal; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It’s telling that many of our techno-prophets don’t entertain the possibility that artificial intelligence will naturally develop along female lines—fully capable of solving problems but with no desire to annihilate innocents or dominate the civilization.

We can imagine a malevolent human who designs and releases a battalion of robots to sow mass destruction. But disaster scenarios are cheap to play out in the imagination, and we should keep in mind the chain of probabilities that would have to unfold before this one became a reality. An evil genius would have to arise, possessed of both a thirst for pointless mass murder and a brilliance in technological innovation. He would have to recruit and manage a team of co-conspirators that exercised perfect secrecy, loyalty, and competence. And the operation would have to survive the hazards of detection, betrayal, stings, blunders, and bad luck. In theory it could happen, but we have more pressing things to worry about.

Once we put aside the sci-fi disaster plots, the possibility of advanced artificial intelligence is exhilarating—not just for the practical benefits, like the fantastic gains in safety, leisure, and environment-friendliness of self-driving cars but also for the philosophical possibilities. The computational theory of mind has never explained the existence of consciousness in the sense of first-person subjectivity (though it’s perfectly capable of explaining the existence of consciousness in the sense of accessible and reportable information). One suggestion is that subjectivity is inherent to any sufficiently complicated cybernetic system. I used to think this hypothesis was permanently untestable (like its alternatives). But imagine an intelligent robot programmed to monitor its own systems and pose scientific questions. If, unprompted, it asked about why it itself had subjective experiences, I’d take the idea seriously.