DENKRAUMVERLUST

TIMOTHY TAYLOR

Professor of the prehistory of humanity, University of Vienna; author, The Artificial Ape: How Technology Changed the Course of Human Evolution

The human mind has a tendency to confuse things with their signs. There’s a word for this tendency—Denkraumverlust—used by art historian Aby Warburg (1866–1929) and literally translatable as “loss of thinking space.” Part of the appeal of machines that think is that they would not be subject to this, being more logical than we are. On the other hand, they’re unlikely to invent a word or concept such as Denkraumverlust. So what we think about machines that think depends on the type of thinking we’re thinking about, but also on what we mean by “machine.” In the category of machines that think, we’re confusing the sign—or representation—of thinking with the thing itself. And if we assume that a machine is something produced by humans, we underestimate the degree to which machines produce us and the fact that thought has long emerged from this interaction, properly belonging to neither side. (Thinking there are sides may be wrong, too.)

Denkraumverlust can help us understand not just the positive response of some Turing testers to conversations with the Russian–Ukrainian computer program “Eugene Goostman” but also the apparently very different case of the murderous response to cartoons depicting Mohammed. Both illustrate how excitable and even gullible we can be when presented with something that appears to represent something else so well that signifier and signified are conflated.

The Turing Test requires that a machine be indistinguishable from a human respondent by being able to imitate communication (rather than actually think for itself). But if an enhanced Eugene Goostman insisted that it was thinking its own thoughts, how would we know that it really was? If it knew it was supposed to imitate a human mind, how could we distinguish some conscious pretense from the imitation of pretense? Ludwig Wittgenstein used pretense as a special category in discussing the possibility of knowing the status of other minds, asking us to consider a case where someone believes, falsely, that they’re pretending. The possibility of correctly assessing Turing Test results in relation to the possibility of independent artificial thought is core Wittgenstein territory: We can deduce that, in his view, all assessment must be doomed to failure, as it necessarily involves data of an imponderable type.

Denkraumverlust is about unmediated response. Although sophisticated art audiences can appreciate the attempt to fool as part of aesthetic experience (enjoying a good use of three-dimensional perspective on a canvas known to be flat, for example), whenever deception is actually successful, reactions are less comfortable. Cultures regularly censor images thought to have the power to short-circuit our reasoned and reflective responses. Mostly the images are either violent or erotic, but they can also be devotional. Such images, if allowed, can produce a visceral and unmediated reaction appropriate to a real situation. New, unfamiliar representational technologies have a habit of taking us by surprise (when eighteenth-century French sailors gave mirrors to aboriginal Tasmanians, things got seriously out of hand; later anthropologists had similar trouble with photographs).

A classic example of artificially generated confusion is the legendary sculptor Pygmalion, who fell passionately and inappropriately in love with a statue of a goddess that he had carved himself. In the wake of the Pygmalion myth came classical and medieval Arabic automata so realistic, novel, and fascinating in sound and movement that people could be persuaded, if only briefly, that they were actually alive. Machines that think are in this Barnum & Bailey tradition. Like Pygmalion’s sculpture, they project an image, albeit not a visual one. Even if they’re not dressed up to look like cyborg goddesses, they’re representations of us. They’re designed to represent information (often usefully reordered) in terms we find coherent, whether mathematical, statistical, translational, or, as in the Turing Test, conversational.

But the idea of a thinking machine is a false turn. Such objects, however powerfully they may be enabled to elicit unmediated responses from us, will remain automata. The truly significant developments in thought will arise, as they always have, in a biotechnical symbiosis. This distinctively human story is easy to follow in the body (wheeled transport is one of many mechanical inventions that have enabled human skeletons to become lighter) but is probably just as present in the brain (the invention of writing as a form of external intellectual storage may have reduced selection pressure on some forms of innate memory capacity while stimulating others).

In any case, the separate terms human and machine produce their own Denkraumverlust—a loss of thinking space encouraging us to accept as real an unreal dualism. Practically, it’s only the long-term evolution of information technology, from the earliest representations and symbolic constructs to the most advanced current artificial brain, that allows the advancement of thought.