WHAT WILL THE PLACE OF HUMANS BE?

PAUL SAFFO

Technology forecaster; consulting associate professor, Stanford University


The prospect of a world inhabited by robust AIs terrifies me. The prospect of a world without robust AIs also terrifies me. Decades of technological innovation have created a world system so complex and fast-moving that it’s quickly becoming beyond our human ability to comprehend, much less manage. If we’re to avoid civilizational catastrophe, we need more than clever new tools—we need allies and agents.

So-called narrow AI systems have been around for decades. At once ubiquitous and invisible, narrow AIs make art, run industrial systems, fly commercial jets, control rush hour traffic, tell us what to watch and buy, determine whether or not we get a job interview, and play matchmaker for the lovelorn. Add the relentless advance of processing, sensor, and algorithmic technologies, and it’s clear that today’s narrow AIs are on a trajectory toward a world of robust AI. Long before artificial superintelligences arrive, evolving AIs will be pressed into performing once-unthinkable tasks, from firing weapons to formulating policy.

Meanwhile, today’s primitive AIs tell us much about future human/machine interaction. Narrow AIs may lack the intelligence of a grasshopper, but that hasn’t stopped us from holding heartfelt conversations with them and asking them how they feel. It’s in our nature to infer sentience at the slightest hint that life might be present. Just as our ancestors once populated their world with elves, trolls, and angels, we eagerly seek companions in cyberspace. This is one more impetus driving the creation of robust AIs—we want someone to talk to. The consequence could well be that the first nonhuman intelligence we encounter won’t be little green men or wise dolphins but creatures of our own invention.

We of course will attribute feelings and rights to AIs—and eventually they’ll demand it. In Descartes’s time, animals were considered mere machines—a crying dog was no different from a gear whining for want of oil. Late last year, an Argentine court granted rights to an orangutan as a “nonhuman person.” Long before robust AIs arrive, people will extend the same empathy to digital beings and give them legal standing.

The rapid advance of AIs also is changing our understanding of what constitutes intelligence. Our interactions with narrow AIs will cause us to realize that intelligence is a continuum and not a threshold. Earlier this decade, Japanese researchers demonstrated that slime mold could thread a maze to reach a tasty bit of food. Last year a scientist in Illinois demonstrated that under just the right conditions, a drop of oil could negotiate a maze in an astonishingly lifelike way to reach a bit of acidic gel. As AIs insinuate themselves ever deeper into our lives, we’ll recognize that modest digital entities, along with most of the natural world, carry the spark of sentience. From there, it’s just a small step to speculate about what trees or rocks—or AIs—think.

In the end, the biggest question isn’t whether AI superintelligences will eventually appear. Rather, the question is what the place of humans will be in a world occupied by an exponentially growing population of autonomous machines. Bots on the Web already outnumber human users. The same will soon be true in the physical world. As Lord Dunsany once cautioned, “If we change too much we may no longer fit into the scheme of things.”