BEYOND THE UNCANNY VALLEY

JOICHI ITO

Director, MIT Media Lab

You can’t think about thinking without thinking about thinking about something.—SEYMOUR PAPERT

What do I think about machines that think? It depends on what they’re thinking about. I’m clearly in the camp of people who believe that AI and machine learning will contribute greatly to society. I expect we’ll find machines to be exceedingly good at things we’re not—things that involve speed, accuracy, reliability, obedience, massive amounts of data, computation, distributed networking, and parallel processing.

The paradox is that while we’ve been developing machines that behave more and more like humans, we’ve been developing educational systems that push children to think like computers and behave like robots. For our society to scale and grow at the speed we now require, we need reliable, obedient, hardworking physical and computational units. So we spend years converting sloppy, emotional, random, disobedient human beings into meat-based versions of robots. Luckily, mechanical and digital robots and computers will soon help reduce, if not eliminate, the need for people taught to behave like them.

We’ll still need to overcome the fear and even disgust evoked when robot designs bring us closer and closer to the “Uncanny Valley,” in which robots demonstrate almost human qualities without quite reaching them. This is also true for computer animation, zombies, even prosthetic hands. But we may be approaching the valley from both ends. If you’ve ever modified your voice to be understood by a voice-recognition system on the phone, you understand how, as humans, we can edge into the Uncanny Valley ourselves.

There are a number of theories about why we feel this revulsion, but I think it has something to do with humans feeling they’re special—a kind of existential ego. This may have monotheistic roots. Around the time Western factory workers were smashing robots with sledgehammers, Japanese workers were putting hats on the same robots in factories and giving them names. On April 7, 2003, Astro Boy, the Japanese robot character, was made an honorary citizen of the city of Niiza, Saitama. If these anecdotes tell us anything, it’s that animist religions may have less trouble dealing with the idea that maybe we’re not really in charge. If nature is a complex system in which all things—humans, trees, stones, rivers, homes—are all animate in some way, with their own spirits, then maybe it’s OK that God doesn’t really look like us, or think like us, or think we’re all that special.

So perhaps one of the most useful aspects of being alive in the era when we begin to ask this question is that it raises a larger question about the role of human consciousness. Human beings are part of a massively complex system—complex beyond our comprehension. Like the animate trees, stones, rivers, and homes, maybe algorithms running on computers are just another part of this complex ecosystem.

As human beings, we’ve evolved an ego and a belief that there’s such a thing as a self, but that’s largely a deception allowing each human unit to work usefully within the parameters of evolutionary dynamics. Perhaps the morality emerging from it is another deception of sorts; for all we know, we might be living in a simulation, where nothing actually matters. This doesn’t mean we shouldn’t have ethics and good taste; we can exercise our sense of responsibility as part of a complex, interconnected system without having to rely on the argument that “I’m special.” As machines become an increasingly important part of this system, our human arguments about being special will be increasingly fraught. Maybe that’s a good thing.

Perhaps what we think about machines that think doesn’t really matter—they’ll think and the system will adapt. As with most complex systems, the outcome is mostly unpredictable. It is what it is, and will be what it will be. Most of what we think will happen is probably hopelessly wrong—and as we know from climate change, knowing that something’s happening and doing something about it aren’t the same thing.

That might sound defeatist, but I’m actually quite optimistic. I believe that systems are adaptive and resilient and that—whatever happens—beauty, happiness, and fun will persist. I hope human beings will have a role. My guess is that they will.

It turns out that we don’t make great robots, but we’re very good at doing random and creative things that would be impossibly complex—and probably a waste of resources—to code into a machine. Ideally, our educational system will evolve to more fully embrace our uniquely human strengths, rather than trying to shape us into second-rate machines. Human beings—though not necessarily our current form of consciousness and the linear philosophy around it—are good at transforming messiness and complexity into art, culture, and meaning. If we focus on what each of us is best at, humans and machines will develop a wonderful yin-yang sort of relationship, with humans feeding off the efficiency of our solid-state brethren while they feed off our messy, sloppy, emotional, and creative bodies and brains.

We’re descending not into chaos, as many believe, but into complexity. While the Internet connects everything outside us into a vast, seemingly unmanageable system, we find an almost infinite amount of complexity as we dig deeper into our own biology. Much as we’re convinced our brains run the show while our microbiomes alter our drives, desires, and behaviors to support their own reproduction and evolution, it may never be clear who’s in charge—us, or our machines. But maybe we’ve done more damage by believing humans are special than we could by embracing a humbler relationship with the creatures, objects, and machines around us.