THOUGHT-STEALING MACHINES

MAXIMILIAN SCHICH

Art historian; associate professor of arts and technology, University of Texas at Dallas


Machines increasingly do things we previously considered thinking, but that we don’t do anymore because now machines do them. I stole this thought more or less accurately from Danny Hillis, father of the Connection Machine and the Knowledge Graph. Stealing thoughts is a common activity in the thought processes of both humans and machines. Indeed, when we humans think, much of the content of our thoughts comes from our own past experience or the documented experience of others. Very rarely do we come up with something completely new. Our machines aren’t much different. What is called cognitive computing is in essence nothing but a sophisticated thought-stealing mechanism, driven by a vast amount of knowledge and a complicated set of algorithmic processes. Such thought-stealing processes, in both human(istic) thought and cognitive computing, are impressive, as they’re able to steal not just existing thoughts but also potential thoughts that are reasonable or likely, based on a given corpus of knowledge.

Today, thought-stealing machines can produce scholarly texts indistinguishable from “postmodern thought,” computer-science papers that get accepted at conferences, and compositions that experts cannot distinguish from works by classical composers. As in weather forecasting, machines can now produce many different cognitive representations based on expectations derived from documents about the past or about similar situations. Renaissance antiquarians would be delighted, as these machines are a triumph of the very methods that gave rise to modern archaeology and many other branches of science and research. But how impressed should we really be?

Our machines get more and more sophisticated, and so do their results. But as we build better and better machines, we also learn more and more about nature. In fact, natural cognition is likely far more complex and detailed than our current incarnations of artificial intelligence or cognitive computing. For example, how sophisticated must we imagine natural cognition to be, when quantum coherence at room temperature may help the birds in our garden sense the magnetic field? How complex must we imagine embodied cognition in octopuses to be, when it’s possible to build Turing machines made exclusively of artificial muscles? How should we answer these questions, when we’re still far from recording in full detail what’s going on in our brains? My guess is that in 200 years our current thinking machines will look as primitive as the original Mechanical Turk.

However sophisticated they may become, our machines are still primitive compared to the resolution and efficiency of natural cognition. Like protobiotic metabolism, they’re below a critical threshold of real life. But they’re powerful enough that we can enter a new era of exploration. Our machines allow us to produce many more thoughts than were ever produced before, with innovation becoming an exercise in finding the right thought in the set of all possible thoughts. Ingenuity will lie as much in the proper exploration of such ready-made sets of thoughts as in having our own ideas. Measuring the cognitive space of all possible thoughts will be as awe-inspiring as astronomy’s exploration of the universe. Maybe Mahler’s potential Sixtieth is as awesome as his Sixth.