THE AIRBUS AND THE EAGLE

DANIEL L. EVERETT

Linguist; dean of arts and sciences, Bentley University; author, Language: The Cultural Tool

The more we learn about cognition, the stronger becomes the case for understanding human thinking as the nexus of several factors, as the emergent property of the interaction of the human body, human emotions, culture, and the specialized capacities of the entire brain. One of the greatest errors of Western philosophy was to buy into the Cartesian dualism of the famous statement, “I think, therefore I am.” It’s no less true to say, “I burn calories, therefore I am.” Even better would be to say, “I have a human evolutionary history, therefore I can think about the fact that I am.”

The mind is never more than a placeholder for things we don’t understand about how we think. The more we use the solitary term mind to refer to human thinking, the more we underscore our lack of understanding. At least this is an emerging view of many researchers in fields as varied as neuroanthropology, emotions research, embodied cognition, radical embodied cognition, dual-inheritance theory, epigenetics, neurophilosophy, and the theory of culture.

For example, in the laboratory of Professor Martin Fischer at the University of Potsdam, interesting research is being done on the connection between the body and mathematical reasoning. Stephen Levinson’s group at the Max Planck Institute for Psycholinguistics in Nijmegen has shown how culture can affect navigational abilities, a vital cognitive function of most species. In my own research, I’m looking at the influence of culture on the formation of what I refer to as “dark matter of the mind,” a set of knowledges, orientations, biases, and patterns of thought that affect our cognition profoundly and pervasively.

If human cognition is indeed a property that emerges from the intersection of our physical, social, emotional, and data-processing abilities, then intelligence as we know it in humans is almost entirely unrelated to “intelligence” devoid of these properties.

I believe in artificial intelligence as long as we realize it’s artificial. Comparing computational problem solving, chess playing, reasoning, and so on to human thinking is like comparing the flight of an Airbus A320 to an eagle’s. It’s true that they both temporarily defy the pull of gravity, that they’re both subject to the physics of the world in which they operate, and so on, but the similarities end there. Bird flight and airplane flight shouldn’t be confused.

The reasons artificial intelligence isn’t real intelligence are many. First, there’s meaning. Some claim to have solved this problem, but they haven’t, really. This “semantics problem” is, as John Searle pointed out years ago, why a computer running a translation program converting English into Mandarin speaks neither English nor Mandarin. No computer can learn a human language—only bits and combinatorics for special purposes. Second, there’s the problem of what Searle calls the background and what I refer to as dark matter, or what some philosophers intend by the phrase tacit knowledge.

We learn to reason in a cultural context, where culture means a system of violable, ranked values, hierarchically structured knowledges, and social roles. We can do this not only because we have an amazing ability to perform what appears to be Bayesian inferencing across our experiences but also because of our emotions, our sensations, our proprioception, and our strong social ties. There’s no computer with cousins and opinions about them.

Computers may be able to solve a lot of problems. But they cannot love. They cannot urinate. They cannot form social bonds because they’re emotionally driven to do so. They have no romance. The popular idea that we may someday be able to upload our memories to the Internet and live forever is silly—we’d need to upload our bodies as well. The idea that comes up in discussions about artificial intelligence—that we should fear that machines will control us—is but a continuation of the idea of the religious “soul,” cloaked in scientific jargon. It detracts from real understanding.

Of course, one ought never to say what science cannot do. Artificial intelligence may one day become less artificial by re-creating bodies, emotions, social roles, values, and so on. But until it does, it will still be useful for vacuum cleaners, calculators, and cute little robots that talk in limited, trivial ways.