YOU ARE WHAT YOU EAT

ANDY CLARK

Philosopher and cognitive scientist, University of Edinburgh; author, Supersizing the Mind: Embodiment, Action, and Cognitive Extension

A common theme in recent writings about machine intelligence is that the best new learning machines will constitute alien forms of intelligence. I’m not so sure. The reasoning behind the “alien AIs” image usually goes something like this: The best way to get machines to solve hard real-world problems is to set them up as statistically sensitive learning machines able to benefit maximally from exposure to Big Data. Such machines will often learn to solve complex problems by detecting patterns, and patterns among patterns, and patterns within patterns, hidden deep in the massed data streams to which they’re exposed. This will most likely be achieved using deep learning algorithms to mine deeper and deeper into the data streams. After such learning is complete, what results may be a system that works but whose knowledge structures are opaque to the engineers and programmers who set the system up in the first place.

Opaque? In one sense, yes. We won’t (at least without further work) know in detail what has become encoded as a result of all that deep, multilevel, statistically driven learning. But alien? I’m going to take a big punt at this point and road-test a possibly outrageous claim. I suspect that the more these machines learn, the more they’ll end up thinking in ways recognizably human. They’ll end up having a broad structure of humanlike concepts with which to approach their tasks and decisions. They may even learn to apply emotional and ethical labels in roughly the same ways we do. If I’m right, this undermines the common worry that these are emerging alien intelligences whose goals and interests we cannot fathom and that might therefore turn on us in unexpected ways. I suspect that the ways they might turn on us will be all too familiar—and thus, one hopes, avoidable by the usual steps of extending due respect and freedom.

Why would the machines think like us? The reason has nothing to do with our ways of thinking being objectively right or unique. Rather, it has to do with what I’ll dub the Big Data food chain. These AIs, if they’re to emerge as plausible forms of general intelligence, will have to learn by consuming the vast electronic trails of human experience and human interests, for this is the biggest available repository of general facts about the world. To break free of restricted unidimensional domains, these AIs will have to trawl the mundane seas of words and images we lay down on Facebook, Google, Amazon, and Twitter. Where before they may have been force-fed a diet of astronomical objects or protein-folding puzzles, the breakthrough general intelligences will need a richer and more varied diet. That diet will be the massed strata of human experience preserved in our daily electronic media.

The statistical baths in which we immerse these potent learning machines will thus be all too familiar. They will feed off the fossil trails of our own engagements, a zillion images of bouncing babies, bouncing balls, LOLcats, and potatoes that look like the Pope. These are the things they must crunch into a multilevel world model, finding the features, entities, and properties (latent variables) that best capture the streams of data to which they’re exposed. Fed on such a diet, these AIs may have little choice but to develop a world model that has much in common with our own. They’re probably more in danger of becoming Super Mario freaks than Supervillains intent on world domination.

Such a diagnosis (which is tentative and at least a little playful) goes against two prevailing views. First, as mentioned earlier, it goes against the view that current and future AIs are basically alien forms of intelligence feeding off Big Data and crunching statistics in ways that will render their intelligences increasingly opaque to human understanding. Second, it questions the view that the royal route to human-style understanding is human-style embodiment, with all the interactive potentialities (to stand, sit, jump, etc.) that that implies. For although our own typical route to understanding the world goes via a host of such interactions, theirs might not. Such systems will doubtless enjoy some (probably many and various) means of interacting with the physical world. These encounters will be combined, however, with exposure to rich information trails reflecting our own modes of interaction with the world. So it seems possible that they could come to understand and appreciate soccer and baseball just as much as the next person. An apt comparison here might be with a differently abled human being.

There’s lots more to think about here, of course. For example, the AIs will see huge swaths of human electronic trails and thus be able to discern patterns of influence among them over time. That means they may come to model us less as individuals and more as a kind of complex distributed system. That’s a difference that might make a difference. And what about motivation and emotion? Maybe these depend essentially on features of our human embodiment, such as gut feelings and visceral responses to danger. Perhaps—but notice that these features of human life have themselves left fossil trails in our electronic repositories.

I might be wrong. But at the very least, I think we should think twice before casting our homegrown AIs as emerging forms of alien intelligence. You are what you eat, and these learning systems will have to eat us. Big time.