LEVERAGING HUMAN INTELLIGENCE

KEITH DEVLIN

Mathematician; executive director, H-STAR Institute, Stanford University; author, The Man of Numbers: Fibonacci’s Arithmetic Revolution

I know many machines that think. They’re people. Biological machines. (Be careful of that last phrase, “biological machines.” It’s a convenient way to refer to stuff we don’t fully understand in a way that suggests we do.) In contrast, I’ve yet to encounter a digital-electronic, electromechanical machine that behaves in a fashion that would merit the description “thinking,” and I see no evidence to suggest that such may even be possible. HAL-like devices that will eventually rule us are, I believe, destined to remain in the realm of science fiction. Just because something waddles like a duck and quacks doesn’t make it a duck. And if a machine exhibits some features of thinking (e.g., decision making), that doesn’t make it a thinking machine.

We humans are suckers for being seduced by the “if it waddles and quacks, it’s a duck” syndrome. Not because we’re stupid; rather, because we’re human. The very features that allow us to act most of the time in our best interests when faced with potential information overload in complex situations leave us wide open for such seduction.

Many years ago, I visited a humanoid robotics lab in Japan. It looked like a typical engineering Skunk Works. In one corner was a metallic skeletal device, festooned with electrical wires, which had the rough outline of a human upper torso. The sophisticated-looking arms and hands were, I assume, the focus of much of the engineering research, but they weren’t active during my visit, and it was only later that I really noticed them. My entire attention when I walked in, and for much of my time there, was taken up by the robot’s head. Actually, it wasn’t a head at all, just a metal frame with a camera where the nose and mouth would be. Above the camera were two white balls, about the size of Ping-Pong balls (which may be what they were), with black pupils painted on them. Above the eyeballs were two large paperclips, serving as eyebrows.

The robot was programmed to detect the motion of people and pick up sound sources (from someone speaking). It would move its head and its eyeballs to follow anyone who moved, and it would raise and lower its paperclip eyebrows when the target individual was speaking.

What was striking was how alive and intelligent the device seemed. Sure, I and everyone else in the room knew exactly what was going on and how simple the mechanism was that controlled the robotic “gaze” and the paperclip eyebrows. It was a trick. But it was a trick that tapped deep into hundreds of thousands of years of human social and cognitive development, so our natural response was the one normally elicited by another person.

It wasn’t even that I was unaware of how the trick worked. My then Stanford colleague and friend, the late Clifford Nass, had done hundreds of hours of research showing how we humans are genetically programmed to ascribe intelligent agency based on a few simple interaction clues—reactions so deep and ingrained that we cannot eliminate them. There probably was some sophisticated AI that could control the robot’s arms and hands, but the eyes and eyebrows were controlled by a very simple program. Even so, that behavior was enough to give me the clear sense that the robot was a curious, intelligent participant, able to follow what I said. What it was doing, of course, was leveraging my humanity and my intelligence. It wasn’t thinking.

Leveraging human intelligence is all well and good if the robot is to clean your house, book your airline tickets, or drive your car. But would you want such a machine to serve on a jury, make a crucial decision regarding a hospital procedure, or have control over your freedom? I certainly wouldn’t.

So when you ask me what I think about machines that think, my answer is that for the most part I like them, because they’re people (and perhaps also various other animals). What worries me is the increasing degree to which we’re ceding aspects of our lives to machines that decide, often much more effectively and reliably than people can, but definitely don’t think. There’s the danger: machines that can make decisions but don’t think.

Decision making and thinking aren’t the same, and we shouldn’t confuse them. When we deploy decision-making systems in matters of national defense, health care, and finance, as we do, the potential dangers of such confusion, both for individuals and for society, are particularly high. To guard against those dangers, it helps to be aware that we’re genetically programmed to act in trustful, intelligent-agency-ascribing ways in certain kinds of interactions, be they with people or machines. But sometimes a device that waddles and quacks is just a device. It ain’t no duck.