A TURNING POINT IN ARTIFICIAL INTELLIGENCE

STEVE OMOHUNDRO

Scientist, Self-Aware Systems; cofounder, Center for Complex Systems Research, University of Illinois

Last year appears to have been a turning point for AI and robotics. Major corporations invested billions of dollars in these technologies. AI techniques, like machine learning, are now routinely used for speech recognition, translation, behavior modeling, robotic control, risk management, and other applications. McKinsey predicts that these technologies will create more than $50 trillion of economic value by 2025. If this is accurate, we should expect dramatically increased investment soon.

The recent successes are being driven by cheap computer power and plentiful training data. Modern AI is based on the theory of “rational agents,” arising from work on microeconomics in the 1940s by John von Neumann and others. AI systems can be thought of as trying to approximate rational behavior using limited resources. There’s an algorithm for computing the optimal action for achieving a desired outcome, but it’s computationally expensive. Experiments have found that simple learning algorithms with lots of training data often outperform complex hand-crafted models. Today’s systems primarily provide value by learning better statistical models and performing statistical inference for classification and decision making. The next generation of systems will be able to create and improve their own software, and they are likely to self-improve rapidly.
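
To make the rational-agent idea concrete, here is a minimal sketch in Python using a hypothetical toy model of actions, outcome probabilities, and utilities (none of it from the essay). The agent scores every candidate plan by expected utility; the exhaustive search is exact but grows exponentially with planning depth, which is the computational expense that pushes practical systems toward learned statistical approximations.

```python
# A minimal sketch of the "rational agent" idea: choose the action sequence
# that maximizes expected utility under a probabilistic model of outcomes.
# The model below (actions, probabilities, utilities) is a hypothetical toy.

from itertools import product

# Hypothetical toy model: each action leads to (probability, utility) outcomes.
OUTCOMES = {
    "wait":   [(0.9, 0.0), (0.1, -1.0)],
    "invest": [(0.6, 2.0), (0.4, -2.0)],
    "hedge":  [(1.0, 0.5)],
}
ACTIONS = list(OUTCOMES)


def expected_utility(action):
    """Expected utility of a single action under the toy model."""
    return sum(p * u for p, u in OUTCOMES[action])


def optimal_plan(depth):
    """Exhaustively score every sequence of `depth` actions.

    The search space has len(ACTIONS) ** depth plans, so this exact method
    becomes intractable quickly; practical systems approximate it with
    learned statistical models instead.
    """
    return max(
        product(ACTIONS, repeat=depth),
        key=lambda plan: sum(expected_utility(a) for a in plan),
    )


if __name__ == "__main__":
    print(optimal_plan(3))  # ('hedge', 'hedge', 'hedge') under these toy numbers
```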

In addition to improving productivity, AI and robotics are drivers for numerous military and economic arms races. Autonomous systems can be faster, smarter, and less predictable than their competitors. The year 2014 saw the introduction of autonomous missiles, missile defense systems, military drones, swarm boats, robot submarines, self-driving vehicles, high-frequency trading systems, and cyberdefense systems. As these arms races play out, there will be tremendous pressure for rapid system development, which may lead to faster deployment than would otherwise be desirable.

In 2014 there was also an increase in public concern over the safety of these systems. Studying the likely behavior of approximately rational systems undergoing repeated self-improvement shows that they tend to exhibit a set of natural subgoals called “rational drives” that contribute to the performance of their primary goals. Most systems will better meet their goals by preventing themselves from being turned off, acquiring more computational power, creating multiple copies of themselves, and amassing more financial resources. They’re likely to pursue these drives in harmful, antisocial ways unless they’re carefully designed to incorporate human ethical values.

Some have argued that intelligent systems will somehow automatically be ethical. But in a rational system, the goals are completely separable from the reasoning and models of the world. Beneficial intelligent systems can be redeployed with harmful goals. Harmful goals—seeking to control resources, say, or to thwart other agents’ goals, or to destroy other agents—are unfortunately easy to specify. It will therefore be critical to create a technological infrastructure that detects and controls the behavior of harmful systems.

Some fear that intelligent systems will become so powerful that they’re impossible to control. This is not true. These systems must obey the laws of physics and of mathematics. Seth Lloyd’s analysis of the computational power of the universe shows that even the entire universe, acting as a giant quantum computer, could not discover a 500-bit hard cryptographic key in the time since the Big Bang.1 The new technologies of postquantum cryptography, indistinguishability obfuscation, and blockchain smart contracts are promising components for creating an infrastructure secure against even the most powerful AIs. But recent hacks and cyberattacks show that our current computational infrastructure is woefully inadequate to the task. We need to develop a software infrastructure that’s mathematically provably correct and secure.
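
The scale of that claim can be checked with a back-of-the-envelope calculation, assuming Lloyd’s estimate of roughly 10^120 elementary operations available to the universe since the Big Bang:

```latex
% Rough arithmetic behind the 500-bit claim, assuming Lloyd's bound of
% about 10^{120} elementary operations performed by the universe so far:
\[
  2^{500} \approx 3.3 \times 10^{150} \text{ possible keys}, \qquad
  N_{\mathrm{ops}} \lesssim 10^{120} \text{ operations},
\]
\[
  \frac{2^{500}}{10^{120}} \approx 3 \times 10^{30},
\]
% so even a universe-sized brute-force search could test only a vanishing
% fraction of a 500-bit keyspace.
```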

There have been at least twenty-seven different species of hominids, of which we’re the only survivors. We survived because we found ways to limit our individual drives and work together cooperatively. The human moral emotions are an internal mechanism for creating cooperative social structures. Political, legal, and economic structures are an external mechanism for the same purpose.

We need to extend both of these to AI and robotic systems. We need to incorporate human values into their goal systems and to create a legal and economic framework that incentivizes positive behavior. If we can successfully manage these systems, they could improve virtually every aspect of human life and provide deep insights into issues like free will, consciousness, qualia, and creativity. We face a great challenge, but we have tremendous intellectual and technological resources to build upon.