DOMINATION VERSUS DOMESTICATION

GARY KLEIN

Psychologist; senior scientist, MacroCognition LLC; author, Seeing What Others Don’t

Artificial intelligence is commonly used as a tool to augment our own thinking. But the growing intelligence of these systems suggests that AI can and will be more than a tool, more than our servant. What kind of relationship might we expect?

We’re hearing a lot about how superintelligent machines may spell the end of the human race—and that the future relationship between humans and AI will be a contest for domination. Another path, however, is for AI to grow into a collaborator, with the same give-and-take we have with our favorite colleagues. We managed to domesticate wolves into faithful dogs; perhaps we can domesticate AI and avoid a conflict over domination.

Unfortunately, domesticating AI will be much harder than just building faster machines with larger memories and more powerful algorithms for crunching more data. To see why, consider a simple transaction with an everyday intelligent system, a route planner. Imagine you’re using your favorite GPS system to find your way in an unfamiliar area, and the GPS directs you to turn left at an intersection, which strikes you as wrong. If your navigation was being done by a friend in the passenger seat reading a map, you’d ask, “Are you sure?” or perhaps just, “Left?” with an intonation signaling disbelief.

However, you don’t have any way to query your GPS system. These systems, and AI in general, aren’t capable of meaningful explanations. They can’t describe their intentions in a way we’d understand. They can’t adopt our perspective to determine what statement would satisfy us. They can’t convey confidence in the route they’ve selected, other than giving a probabilistic estimate of the time differential for alternative routes, whereas we want them to reflect on the plausibility of the assumptions they’re making. For these and other reasons, AI is not a good partner in joint activities for route planning or most other tasks. It’s a tool, a powerful tool that’s often quite helpful. But it’s not a collaborator.

Many things must happen in order to transform AI from tool to collaborator. One possible starting point is to have AI become trustworthy. The concept of “trust in automation” is somewhat popular at the moment but far too narrow for our purpose. Trust in automation refers to whether the operator can believe the outputs of the automated system or suspects that the software may contain bugs or, worse yet, may be compromised. Combatants worry about relying on intelligent systems likely to be hacked. They worry about having to gauge which parts of the system have been affected by an unauthorized intrusion and the ripple effects on the rest of the system.

Accuracy and reliability are important features of collaborators, but trust goes deeper. We trust people if we believe they’re benevolent and want us to succeed. We trust them if we understand how they think, so that we have common ground to resolve ambiguities. We trust them if they have the integrity to admit mistakes and accept blame. We trust them if we have shared values—not the sterile exercise of listing value priorities but dynamic testing of values to see whether we’d make the same tradeoffs when values conflicted with each other. For AI to become a collaborator, it will have to consistently be seen as trustworthy. It will have to judge what kinds of actions will make it appear trustworthy in the eyes of a human partner. If AI systems can move down this domestication path, the doomsday struggle for domination may be avoided.

There’s yet another issue to think about. As we depend more on our smartphones and other devices to communicate, some have worried that our social skills are eroding. People who spend their days on Twitter with a wide range of audiences, year after year, may be losing social and emotional intelligence. They may be taking an instrumental view of others, treating them as tools for satisfying objectives. One can imagine a future in which humans have forgotten how to be trustworthy, forgotten wanting to be trustworthy. If AI systems become trustworthy and we don’t, perhaps domination by AI systems might be a good outcome after all.