THINKING ABOUT PEOPLE WHO THINK LIKE MACHINES

HAIM HARARI

Physicist; former president, Weizmann Institute of Science; author, A View from the Eye of the Storm


When we say “machines that think,” we really mean “machines that think like people.” It’s obvious that in many different ways machines do think: They trigger events, process things, take decisions, make choices, and perform many (but not all) other aspects of thinking. But the real question is whether machines can think like people, the age-old test of artificial intelligence: You observe the result of the thinking and you cannot tell whether it was done by a machine or a human.

Some prominent scientific gurus are scared by a world controlled by thinking machines. I’m not sure this is a valid fear. I’m more concerned about a world led by people who think like machines, a major emerging trend of our digital society.

You can teach a machine to track an algorithm and perform a sequence of operations that follow logically from one another. It can do so faster and more accurately than any human. Given well-defined basic postulates or axioms, pure logic is the strong suit of the thinking machine. But exercising common sense in making decisions and being able to ask meaningful questions are, so far, the prerogative of humans. Merging intuition, emotion, empathy, experience, and cultural background—and using all of these to ask a relevant question and draw conclusions by combining seemingly unrelated facts and principles—are trademarks of human thinking not yet shared by machines.

Our human society is moving fast toward rules, regulations, laws, investment vehicles, political dogmas, and patterns of behavior that blindly follow strict logic, even when it starts with false foundations or collides with obvious common sense. Religious extremism has always progressed on the basis of some absurd axioms, leading logically to endless harsh consequences. Several disciplines—such as law, accounting, and certain areas of mathematics and technology—augmented by bureaucratic structures and by media that idolize inflexible regulators, often lead to opaque principles like “total transparency” and tolerance toward intolerant acts. These and similar trends are moving us toward more algorithmic and logical modes of tackling problems, often at the expense of common sense. If common sense, whatever its definition, describes one of the advantages of people over machines, what we see today is a clear move away from this distinctly human asset.

Unfortunately, the gap between machine thinking and human thinking can narrow in two ways, and when people begin to think like machines, we automatically achieve the goal of “machines that think like people,” reaching it from the wrong direction. A very smart person who reaches conclusions in a split second, on the basis of one line of information, between dozens of e-mails, text messages, and tweets (not to speak of other digital disturbances), and who then jumps to premature conclusions or signs a public petition about a subject he or she is unfamiliar with, is not superior to a machine of moderate intelligence that analyzes a large amount of relevant information before reaching a conclusion.

One can recite hundreds of examples of this trend. We all support the law that every new building should allow total access to people with special needs, while old buildings may remain inaccessible until they’re renovated. But does it make sense to forbid renovating an old bathroom to provide such access, merely because a new elevator cannot also be installed? Or to demand full public disclosure of all CIA or FBI secret sources to enable a court of law to sentence a terrorist who obviously murdered hundreds of people? Or to demand parental consent before giving a teenager an aspirin at school? And when school texts are converted from miles to kilometers, the sentence “From the top of the mountain, you can see for approximately 100 miles” is translated, by a person, into “You can see for approximately 160.934 km.”

The standard sacred cows of liberal democracy rightfully include a wide variety of freedoms: freedom of speech, freedom of the press, academic freedom, freedom of religion (or of lack of religion), freedom of information, and numerous other human rights, including equal opportunity, equal treatment under law, and absence of discrimination. We all support these principles, but pure and extreme logic induces us, against common sense, to insist mainly on the human rights of criminals and terrorists, because the human rights of the victims “are not an issue.” Transparency and freedom of the press logically demand complete reports on internal brainstorming sessions in which delicate issues are pondered, thus preventing any free discussion and raw thinking in certain public bodies. Academic freedom might logically be misused, against common sense and against factual knowledge, to teach about Noah’s Ark as an alternative to evolution, to deny the Holocaust in teaching history, or to preach for a universe created 6,000 years ago (rather than 14 billion) as the basis of cosmology. We can go on and on with examples, but the message is clear.

Algorithmic thinking, brevity of messages, and overexertion of pure logic are moving us into machine thinking, rather than slowly and wisely teaching our machines to benefit from our common sense and intellectual abilities. A reversal of this trend would be a meaningful U-turn in human digital evolution.