FEAR NOT THE AI

GREGORY BENFORD

Emeritus professor of physics and astronomy, UC Irvine; novelist, Shipstar


AI need not be Frankenstein’s monster, and we can trust the naysayers to keep it that way. Plus, trust in our most mysterious ability—invention, originality.

Take self-driving cars. What are the chances that their guiding algorithm will suddenly, deliberately kill the passenger? Zero, if you’re smart in designing it. Fear of airplane and car crashes is a useful check on low-level AIs.

Why do people worry that future algorithms will be dangerous? Because they fear malicious programming, or that algorithms will have unforeseen consequences that can hurt us—an idea that seems plausible on the face of it but doesn’t hold up.

Our fears are our best defense. No adventurous algorithm will escape the steely glare of its many skeptical inspectors. Any AI with abilities in the physical world where we actually live will get a lot of inspection. Plus field trials, limited-use experience, the lot. That will stop runaway uses that could cause harm. Even so, we should realize that AIs, like many inventions, are in an arms race. Computer viruses were the first example; I invented the first one in 1969. They race against virus detectors—but they’re mere pests, not lethal.

Smart sabotage algorithms (say, future versions of Stuxnet) already float through the netsphere and are far worse. These could quietly infiltrate many routine operations of governments and companies. Most would come from bad actors. But with genetic-programming and autonomous-agent software already out there, they could mutate and evolve by chance, in Darwinian fashion—especially where no one’s looking. They’ll get smarter still. Distributing the computation over many systems or networks would make it even harder to know how detected parts relate to some higher-order whole. So some might well escape the steely glare. But defensive algorithms can evolve, too, in Lamarckian fashion—and directed selection evolves faster. So the steely glare has an advantage.
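The claim that directed selection outpaces chance mutation can be illustrated with a toy sketch. The short Python below is purely hypothetical and not from the essay: two searchers mutate a 64-bit string, but only the "directed" one keeps a change when it improves a fitness score, and it closes in on its target long before the blind one does.

```python
# Toy illustration (hypothetical, not from the essay): directed selection
# versus blind mutation on a 64-bit "genome".
import random

TARGET = [1] * 64  # an arbitrary goal standing in for "useful behavior"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    child = genome[:]
    i = random.randrange(len(child))
    child[i] ^= 1  # flip one randomly chosen bit
    return child

def evolve(directed, steps=2000):
    genome = [0] * 64
    for _ in range(steps):
        child = mutate(genome)
        # Directed selection keeps only improvements; blind drift keeps every change.
        if not directed or fitness(child) >= fitness(genome):
            genome = child
    return fitness(genome)

random.seed(0)
print("blind mutation:    ", evolve(directed=False))  # stays near chance, about 32/64
print("directed selection:", evolve(directed=True))   # climbs to (or near) 64/64
```

The gap only widens as the search space grows, which is the point: defenders who select deliberately have the edge over attackers mutating by luck.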

We humans are ugly, ornery, and mean, but we’re damned hard to kill—for a reason. We’ve prevailed against many enemies—predators, climate shocks, competition with other hominids—through hundreds of thousands of years, emerging as the most cantankerous species, feared by all others. The forest goes silent as we walk through it; we’re the top predator.

That gives us instincts and habits of mind revealed in matters seemingly benign, like soccer, American football, and countless other ball games. We love the pursuit and handling of small, jumpy balls that we struggle to control or capture. Why? Because we once did something like that for a living: hunting. Soccer is like running down a rabbit. Similar animal energies simmer just below the surface of our society. Any AI with ambitions to Take Over the World (the theme of many bad sci-fi movies) will find itself confronting an agile, angry, smart species on its own territory, the real material world, not the computational abstractions of 0s and 1s. My bet is on the animal nature.

Here’s the only real worry: Of course, we’ll get algorithms able to perform abstract tasks better than humans. Many jobs have evaporated because of savvy software. But as AIs get smarter, will that destroy people’s self-confidence? That’s a real danger—but a small one, I think, for most of us (and especially for those reading this). Plenty of people have lost jobs to computers, though it’s never put that way by the Human Resources flunky who delivers the blow. Middle managers, secretaries, route planners for trucking companies; the list is endless. They get replaced by software. But they seldom feel crushed. Mostly they move on to something else. We’ve learned to deal with that fairly well, without retreat into Luddite frenzy. But we can’t deal well with a threat only now looking like a small, distant, dark cloud on the far horizon: AIs that perform better than we do at the very highest levels.

This small cloud need not concern us now. It may never appear. Right now, we have trouble making an AI that passes the Turing Test. The future landscape will look clearer a decade or two from now, and then we can think about an AI that can solve, say, the general relativity / quantum mechanics riddle. Personally, I’d like to see a machine that takes on that task. Originality—the really hard part of being smart, and utterly not understood, even in humans—is, so far, undemonstrated in AIs. Our unconscious seems integral to our creativity (we don’t have ideas; they have us), so should an AI have an unconscious? Maybe even clever programming and random evolution couldn’t produce one.

If that huge obstacle is surmounted someday and we get such an AI, I won’t fear it—I have some good questions to ask it.