THE ODDS ON AI

ANTHONY AGUIRRE

Associate professor of physics, UC Santa Cruz

I attribute an unusually low probability to the near-future prospect of general-purpose AI—by which I mean an AI that can formulate abstract concepts based on experience, reason and plan using those concepts, and take action based on the results. We have exactly one example of technological-level intelligence arising, and it has done so through millions of generations of information-processing agents interacting with an incredibly rich environment of other agents and structures that have similarly evolved.

I suspect there are many intricately interacting, hierarchically structured organizational levels involved, from subneuron to the brain as a whole. My suspicion is that replicating the effectiveness of this evolved intelligence in an artificial agent will require amounts of computation not that much lower than evolution has required, which would far outstrip our abilities for many decades, even given exponential growth in computational efficiency per Moore’s Law—and that’s even if we understood how to correctly employ that computation.

I assign a probability of about 1 percent for artificial general intelligence (AGI) arising in the next ten years, and about 10 percent over the next thirty years. (This essentially reflects a probability that my analysis is wrong, times a probability more representative of AI experts, who—albeit with lots of variation—tend to assign somewhat higher numbers.)
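As a minimal sketch of that parenthetical decomposition, the toy Python calculation below blends a skeptical personal estimate with an expert one. The function name and the specific inputs (a 20 percent chance that the skeptical analysis is wrong, and expert figures of 5 and 50 percent) are illustrative assumptions chosen only to show how such numbers could combine; they are not values taken from the essay or from any survey.

# Toy illustration of combining "P(my analysis is wrong)" with expert estimates.
# All specific numbers below are illustrative placeholders, not the author's inputs.

def blended_probability(p_analysis_wrong, p_expert, p_if_analysis_right=0.0):
    """P(AGI) = P(analysis wrong) * expert estimate
              + P(analysis right) * estimate under the skeptical analysis."""
    return p_analysis_wrong * p_expert + (1 - p_analysis_wrong) * p_if_analysis_right

# Hypothetical inputs: a 20% chance the skeptical analysis is wrong,
# with experts at roughly 5% (ten years) and 50% (thirty years).
print(blended_probability(0.2, 0.05))  # ~0.01, i.e., about 1 percent
print(blended_probability(0.2, 0.50))  # ~0.10, i.e., about 10 percent

The point of the sketch is only the structure of the estimate: a small residual probability that the skeptical argument fails, multiplied by more optimistic outside views.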

On the other hand, I assign a rather high probability that, if AGI is created (and especially if it arises relatively quickly), it will be—in a word—insane. Human minds are incredibly complex but have been battle-tested into (relative) stability over eons of evolution in a variety of extremely challenging environments. The first AGIs are unlikely to have been honed in this way. Like the human system, narrow AIs are likely to become more “general” as researchers cobble together AI components (visual and text processing, symbolic manipulation, optimization algorithms, etc.), along with currently nonexistent systems for much more efficient learning, concept abstraction, decision making, and so on.

Given trends in the field, many of these components will probably be deep-learning or similar systems that are effective but somewhat inscrutable. In the first such systems, I’d guess these pieces will just barely work together. So I think the a priori likelihood of early AGIs doing just what we want them to do is quite small.

In this light, there’s a tricky question of whether AGIs will quickly lead to superintelligent AIs (SIs). There’s an emerging consensus that AGI essentially implies SI. While I largely agree, I’d add the caveats that progress may well stall for a while at the near-human level until something cognitively stable can be developed, and that an AGI, even if somewhat unstable, would still have to be high-functioning enough to improve its own intelligence.

Neither case, however, is all that encouraging. The superintelligence that arises could well be flawed in various ways, even if effective at what it does. This intuition is perhaps not far removed from the various scenarios in which superintelligence goes badly awry (taking us with it), often for lack of what we might call common sense. But this common sense is in part a label for the stability we’ve built up as part of an evolutionary and social ecosystem.

So even if AGI is a long way away, I’m deeply pessimistic about what will happen by default if we get it. I hope I’m wrong, but time will tell. (I don’t think we can—or should!—try to stop the development of AI generally. It will do a multitude of great things.)

Meanwhile, I hope that on the way to AGI, researchers will put a lot of thought into how to dramatically lower the probability that things will go wrong once we arrive. In this arena, where the stakes are potentially incredibly high, I’m frustrated when I hear, “I think x is what’s going to happen, so I’m not worried about y.” That’s generally a fine way to think, as long as your confidence in x is high and y isn’t superimportant. But when you’re talking about something that could radically determine the future (or future existence) of humanity, 75 percent confidence isn’t enough. Nor is 90 percent enough, or 99 percent! We’d never have built the Large Hadron Collider if there were a 1 percent (let alone 10 percent) chance of its actually spawning black holes that consumed the world—there were, instead, extremely compelling arguments against that. Let’s see whether similarly compelling reasons not to worry about AGI exist, and if not, let’s make our own.