THE CONTROL CRISIS

NICHOLAS G. CARR

Author, The Shallows: What the Internet Is Doing to Our Brains


Machines that think think like machines. That fact may disappoint those who look forward, with dread or longing, to a robot uprising. For most of us, it’s reassuring. Our thinking machines aren’t about to leap beyond us intellectually, much less turn us into their servants or pets. They’re going to continue to do the bidding of their human programmers.

Much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.

Where things get sticky is when we start looking to computers to perform not as our aids but as our replacements. That’s what’s happening now, and quickly. Thanks to advances in artificial intelligence routines, today’s thinking machines can sense their surroundings, learn from experience, and make decisions autonomously, often at a speed and with a precision beyond our ability to comprehend, much less match. When allowed to act on their own in a complex world, whether embodied as robots or simply outputting algorithmically derived judgments, mindless machines carry enormous risks along with their enormous powers. Unable to question their own actions or appreciate the consequences of their programming—unable to understand the context in which they operate—they can wreak havoc, either as a result of flaws in their programming or through the deliberate aims of their programmers.

We got a preview of the dangers of autonomous software on the morning of August 1, 2012, when Wall Street’s biggest trading outfit, Knight Capital, switched on a new, automated program for buying and selling shares. The software had a bug hidden in its code, and it immediately flooded exchanges with irrational orders. Forty-five minutes passed before Knight’s programmers were able to diagnose and fix the problem. Forty-five minutes isn’t long in human time, but it’s an eternity in computer time. Oblivious to its errors, the software made more than four million deals, racking up $7 billion in errant trades and nearly bankrupting the company. Yes, we know how to make machines think. What we don’t know is how to make them thoughtful.
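Public post-mortems of the Knight incident describe a runaway order loop: the system kept slicing off child orders because the tally of filled shares was never recorded where the loop’s stop condition looked for it. The Python sketch below is a toy reconstruction of that general failure mode, not Knight’s actual system; every name and number in it is invented, and the guard at the bottom exists only so the demo halts.

```python
# Illustrative sketch only: a toy parent/child order router with a
# runaway-loop bug of the kind described in reports on the Knight
# Capital incident. All identifiers here are hypothetical.

from dataclasses import dataclass

@dataclass
class ParentOrder:
    symbol: str
    target_qty: int    # total shares the parent order should buy
    filled_qty: int = 0  # shares bought so far

def route_child_orders(order: ParentOrder, child_size: int = 100):
    """Slice a parent order into child orders until the target is met."""
    sent = []
    while order.filled_qty < order.target_qty:
        sent.append((order.symbol, child_size))  # fire another child order
        # BUG: the fill is recorded in a local variable instead of on the
        # parent order, so the loop's stop condition never becomes true.
        filled_qty = order.filled_qty + child_size  # should be: order.filled_qty += child_size
        if len(sent) > 10_000:  # safety valve for this demo only
            raise RuntimeError(f"runaway loop: {len(sent)} orders sent")
    return sent

order = ParentOrder("XYZ", target_qty=1_000)
try:
    route_child_orders(order)
except RuntimeError as err:
    print(err)  # the software itself never notices anything is wrong
```

Nothing inside the loop can ask whether ten thousand orders for a thousand shares makes sense; the program simply keeps satisfying its broken condition at machine speed, which is the essay’s point in miniature.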

All that was lost in the Knight fiasco was money. As software takes command of more and more economic, social, military, and personal processes, the costs of glitches, breakdowns, and unforeseen effects will only grow. Compounding the dangers is the invisibility of software code. As individuals and as a society, we increasingly depend on artificial intelligence algorithms we don’t understand. Their workings, and the motivations and intentions that shape their workings, are hidden from us. That creates an imbalance of power, and it leaves us open to clandestine surveillance and manipulation. Last year, we got some hints about the ways that social networks conduct secret psychological tests on their members through the manipulation of information feeds. As computers become more adept at monitoring us and shaping what we see and do, the potential for abuse grows.

During the nineteenth century, society faced what the late historian James Beniger described as a “crisis of control.” The technologies for processing matter had outstripped the technologies for processing information, and people’s ability to monitor and regulate industrial and related processes had in turn broken down. The control crisis, which manifested itself in everything from train crashes to supply-and-demand imbalances to interruptions in the delivery of government services, was eventually resolved through the invention of systems for automated data processing, such as the punch-card tabulator that Herman Hollerith built for the U.S. Census Bureau. Information technology caught up with industrial technology, enabling people to bring back into focus a world that had gone blurry.

Today we face another control crisis, though it’s the mirror image of the earlier one. What we’re now struggling to bring under control is the very thing that helped us reassert control at the start of the twentieth century: information technology. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in a way that suits our societal and personal interests. Resolving this new control crisis will be one of the great challenges in the years ahead. The first step in meeting the challenge is to recognize that the risks of artificial intelligence don’t lie in some dystopian future. They are here now.