SELF-AWARE AI? NOT IN 1,000 YEARS!

ROLF DOBELLI

Founder, Zurich Minds; journalist; author, The Art of Thinking Clearly


The widespread fear that AI will endanger humanity and take over the world is irrational. Here’s why.

Conceptually, autonomous or artificial intelligence systems can develop in two ways: either as an extension of human thinking or as radically new thinking. Call the first “Humanoid Thinking,” or Humanoid AI, and the second “Alien Thinking,” or Alien AI.

Almost all AI today is Humanoid Thinking. We use AI to solve problems too difficult, time-consuming, or boring for our limited brains to process: electrical-grid balancing, recommendation engines, self-driving cars, face recognition, trading algorithms, and the like. These artificial agents work in narrow domains with clear goals their human creators specify. Such AI aims to accomplish human objectives—often better, with fewer cognitive errors, distractions, outbursts of bad temper, or processing limitations. In a couple of decades, AI agents might serve as virtual insurance sellers, doctors, psychotherapists, and maybe even virtual spouses and children.

But such AI agents will be our slaves, with no self-concept of their own. They’ll happily perform the functions we set them up to do. If screwups happen, they’ll be our screwups, due to software bugs or overreliance on these agents (Dan Dennett’s point). Yes, Humanoid AIs might surprise us once in a while with novel solutions to specific optimization problems. But in most cases novel solutions are the last thing we want from AI (creativity in nuclear-missile navigation, anyone?). That said, Humanoid AI solutions will always fit a narrow domain. They’ll be understandable, either because we understand what they achieve or because we understand their inner workings. Sometimes the code will become too enormous and jumbled for any one person to understand, because it’s continually patched. In these cases, we can turn it off and program a more elegant version. Humanoid AI will bring us closer to the age-old aspiration of having robots do most of the work while humans are free to be creative—or amused to death.

Alien Thinking is radically different. Alien Thinking could conceivably become a danger to Humanoid Thinking; it could take over the planet, outsmart us, outrun us, enslave us—and we might not even recognize the onslaught. What sort of thinking will Alien Thinking be? By definition, we can’t tell. It will encompass functionality we cannot remotely understand. Will it be conscious? Most likely, but it needn’t be. Will it experience emotion? Will it write bestselling novels? If so, bestselling to us or bestselling to it and its spawn? Will cognitive errors mar its thinking? Will it be social? Will it have a Theory of Mind? If so, will it make jokes, will it gossip, will it worry about its reputation, will it rally around a flag? Will it create its own version of AI (AI-AI)? We can’t say.

All we can say is that humans cannot construct truly Alien Thinking. Whatever we create will reflect our goals and values, so it won’t stray far from human thinking. You’d need real evolution, not just evolutionary algorithms, for self-aware Alien Thinking to arise. You’d need an evolutionary path radically different from the one that led to human intelligence and Humanoid AI.
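For contrast, here is a sketch of the kind of evolutionary algorithm that, on this argument, is not enough: a toy replicator-variation-selection loop. The bit-string genome, population size, mutation rate, and generation count are arbitrary choices for illustration, and the crucial part is the fitness function, a goal a human wrote down.

```python
import random

# Toy replicator-variation-selection loop (an evolutionary-algorithm sketch).
# Assumed parameters for illustration only: 30-bit genomes, a population of
# 100, a 1 percent per-bit mutation rate, and 200 generations.
GENOME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 30, 100, 0.01, 200

def fitness(genome):
    # The goal is imposed from outside: here, simply "have as many 1-bits
    # as possible." This is the human-specified objective.
    return sum(genome)

def mutate(genome):
    # Variation: each bit flips with a small probability during copying.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

# Replicators: a population of bit-string genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives ...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ... and replicates with variation.
    population = survivors + [mutate(genome) for genome in survivors]

print("best fitness after", GENERATIONS, "generations:",
      max(fitness(genome) for genome in population))
```

However long such a loop runs, it only ever climbs toward the objective its programmer specified; it reflects our goals, which is exactly why it stays within human thinking.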

So, how do you get real evolution to kick in? Replicators, variation, and selection. Once these three components are in place, evolution arises inevitably. How likely is it that Alien Thinking will evolve? Here’s a back-of-the-envelope calculation:

First, consider what getting from magnificently complex eukaryotic cells to human-level thinking involved. Achieving human thought took a large part of the Earth’s biomass (roughly 500 billion tons of eukaryotically bound carbon) over approximately 2 billion years. That’s a lot of evolutionary work! True, human-level thinking might have happened in half the time; with a lot of luck, even in 10 percent of the time, but it’s unlikely to have happened any faster. You need not only massive amounts of time for evolution to generate complex behavior but also a petri dish the size of Earth’s surface to sustain that level of experimentation.

Assume that Alien Thinking will be silicon-based, as all current AI is. A eukaryotic cell is vastly more complex than, say, Intel’s latest i7 CPU chip—both in hardware and software. Further assume that you could shrink that CPU chip to the size of a eukaryote. Leave aside the quantum effects that would stop the transistors from working reliably. Leave aside the question of the energy source. You’d have to cover the globe with 10³⁰ microscopic CPUs and let them communicate and fight for 2 billion years for true thought to emerge.

Yes, processing speed is faster in CPUs than in biological cells, because electrons are easier to shuttle around than atoms. But eukaryotes work massively in parallel, whereas Intel’s i7 runs only four cores in parallel. Eventually, at least to dominate the world, those electrons would need to move atoms in order to store their software and data in more and more physical places, and that would slow their evolution dramatically. It’s hard to say whether silicon evolution would, on the whole, be faster than biological evolution; we don’t know enough about it. I don’t see why this sort of evolution would be more than two or three orders of magnitude faster than biological evolution (if it’s faster at all), which would bring the emergence of self-aware Alien AI down to roughly a million years.
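To make the arithmetic explicit, here is the same back-of-the-envelope calculation in a few lines of Python. The 2-billion-year baseline and the two-to-three-orders-of-magnitude speedup are the assumptions stated above, not measured quantities.

```python
# Back-of-the-envelope timescale, using the figures from the text.
biological_years = 2e9  # ~2 billion years from complex eukaryotes to human-level thought

# Assumed speedups for silicon evolution: two to three orders of magnitude
# faster than biological evolution, if it is faster at all.
for speedup in (1e2, 1e3):
    estimate = biological_years / speedup
    print(f"speedup {speedup:,.0f}x -> ~{estimate:,.0f} years")

# speedup 100x   -> ~20,000,000 years
# speedup 1,000x -> ~2,000,000 years
# Either way, thousands of times longer than 1,000 years.
```

Even granting the “lucky” 10 percent figure from earlier would only divide these numbers by ten, still leaving estimates in the hundreds of thousands of years.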

What if Humanoid AI becomes so smart it could create Alien AI from the top down? That’s where Leslie Orgel’s Second Rule kicks in: “Evolution is cleverer than you are.” It’s smarter than human thinking. It’s even smarter than Humanoid Thinking. And it’s much slower than you think.

Thus, the danger of AI is not inherent to AI but rests on our overreliance on it. Artificial thinking won’t evolve to self-awareness in our lifetime. In fact, it won’t happen in 1,000 years.

I might be wrong, of course. After all, this back-of-the-envelope calculation applies legacy human thinking to Alien AI—which by definition we won’t understand. But that’s all we can do at this stage.

Toward the end of the 1930s, Samuel Beckett wrote in a diary, “We feel with terrible resignation that reason is not a superhuman gift . . . that reason evolved into what it is, but that it also, however, could have evolved differently.” Replace “reason” with “AI” and you have my argument.