DIRECTIONLESS INTELLIGENCE

EDWARD SLINGERLAND

Professor of Asian Studies, Canada Research Chair in Chinese Thought and Embodied Cognition, University of British Columbia; author, Trying Not to Try

I don’t think much about machines that think, other than to note that they serve as a useful existence proof that thought doesn’t require some mystical extra “something” of the kind mind/body dualists continue to embrace.

I’ve always been baffled by fears of AI machines taking over the world; these fears seem to be based on a fundamental (though natural) intellectual mistake. When we conceptualize a superpowerful Machine That Thinks, we draw upon the best analogy at hand: us. So we tend to think of AI systems as just like us, only much smarter and faster.

This is, however, a bad analogy. A better one would be a really powerful, versatile screwdriver. No one worries about superadvanced screwdrivers rising up and overthrowing their masters. AI systems are tools, not organisms. No matter how good they become at diagnosing diseases or vacuuming our living rooms, they don’t actually want to do any of these things. We want them to, and we then build these wants into them.

It’s also a category mistake to ask what Machines That Think might be thinking about. They aren’t thinking about anything—the “aboutness” of thinking derives from the intentional goals driving the thinking. AI systems, in and of themselves, are entirely devoid of intentions or goals. They have no emotions; they feel neither empathy nor resentment. While such systems might someday be able to replicate our intelligence—and there seems to be no a priori reason why this would be impossible—this intelligence would be completely lacking in direction, which would have to be provided from the outside.

Motivational direction is the product of natural selection working on biological organisms. Natural selection produced our rich and complicated set of instincts, emotions, and drives to maximize our ability to get our genes into the next generation, a process that’s left us saddled with all sorts of goals, including desires to win, dominate, and control. While we may want to win for perfectly good evolutionary reasons, machines couldn’t care less. They just manipulate 0s and 1s, as programmed to do by the people who want them to win. Why on earth would an AI system want to take over the world? What would it do with it?

What is scary as hell is the idea of an entity that combines extrahuman intelligence and speed with our motivational system—in other words, human beings with access to powerful AI systems. But smart primates with nuclear weapons are just as scary, and we’ve managed to survive such a world so far. AI is no more threatening in and of itself than a nuclear bomb—it’s a tool, and the only things to be feared are the creators and wielders of such tools.