ARE WE GOING IN THE WRONG DIRECTION?

SCOTT ATRAN

Anthropologist, Centre National de la Recherche Scientifique, Paris; author, Talking to the Enemy: Violent Extremism, Sacred Values, and What It Means to Be Human


Machines can perfectly imitate some of the ways humans think all of the time, and can consistently outperform humans on some thinking tasks all of the time, but computing machines as usually envisioned will not get human thinking right all of the time, because they process information in ways opposite to humans’ in domains associated with human creativity.

Machines can faithfully imitate the results of some human thought processes whose outcomes are fixed (remembering people’s favorite movies, recognizing familiar objects) or dynamic (jet piloting, grandmaster chess play). And machines can outperform human thought processes, in short time and with little energy, in matters both simple (memorizing indefinitely many telephone numbers) and complex (identifying, from trillions of global communications, social networks whose members may be unaware that they’re part of the network).

However underdeveloped they may be now, I see no principled reason why machines operating independently of direct human control cannot learn from people’s—or their own—fallibilities and so evolve, create new forms of art and architecture, excel in sports (some novel combination of Deep Blue and Oscar Pistorius), invent new medicines, spot talent and exploit educational opportunities, provide quality assurance, or even build and use weapons that destroy people but not other machines.

But if the current focus in artificial intelligence and neuroscience persists, which is to reliably identify patterns of connection and wiring as a function of past connections and forward probabilities, then I don’t think machines will ever be able to capture (imitate) critically creative human thought processes, including novel hypothesis formation in science or even ordinary language production.

Arriving at Newton’s laws of motion or Einstein’s insights into relativity meant imagining ideal worlds without precedent in any past or plausible future experience, such as moving in a world without friction or chasing a beam of light through a vacuum. Such thoughts require levels of abstraction and idealization that disregard, rather than assimilate, as much information as possible to begin with.

Increasingly sophisticated and efficient patterns of input and output, using supercomputers accessing massive data sets and constantly refined by Bayesian probabilities or other statistics based on degrees of belief in states of nature, may well produce ever better sentences and translations or pleasing melodies and novel techno variations. In this way, machines may come to approximate, through a sort of reverse engineering, what human children or experts effortlessly do when they begin with fairly well-articulated internal structures in order to interpret relevant input from an otherwise impossibly noisy world. Humans know from the outset what they’re looking for through the noise: In a sense, people are there before they start. Computing machines can never be sure that they’re there.

Can machines operating independently of direct human control consistently interact with humans in ways such that the humans believe themselves to be interacting with another human? Machines can come vanishingly close in many areas and surpass mightily in others, but just as even the most highly skilled con artist always has some probability—however small—of being caught in deception, whereas the honest person never deceives and so can never be caught, so the associationist-connectionist machine that operates on stochastic rather than structure-dependent principles may never quite get the sense or sensibility of it all.

In principle, structurally richer machines, with internal architecture—beyond “read,” “write,” and “address”—can be built (indeed, earlier advocates of AI added logical syntax), interact with some degree of fallibility (for if no error, then no learning is possible), and culturally evolve. But the current emphasis in much AI and neuroscience, which is to replace posits of abstract psychological structures with physically palpable neural networks and the like, seems to be going in precisely the wrong direction.

Rather, the cognitive structures that psychologists posit (provided they’re descriptively adequate, plausibly explanatory, and empirically tested against alternatives and the null hypothesis) should be the point of departure—what it is that neuroscience and machine models of the mind should be looking for. If we then discover that different abstract structures operate through the same physical substrate, or that similar structures operate through different substrates, then we have a novel and interesting problem that may lead to a revision in our conception of both structure and substrate. The fact that such simple and basic matters as these are puzzling (or even excluded, a priori, from the puzzle) tells us how very primitive still is the science of mind, whether human brain or machine.